From patchwork Mon Mar 4 14:00:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuan Liu X-Patchwork-Id: 1908015 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=ND5ivht1; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=patchwork.ozlabs.org) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Tpl4K0NH4z23fC for ; Tue, 5 Mar 2024 16:49:05 +1100 (AEDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1rhNel-0004P4-02; Tue, 05 Mar 2024 00:48:03 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNei-0004Nm-Nq for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:01 -0500 Received: from mgamail.intel.com ([198.175.65.19]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNeg-0003cV-Ed for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:00 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1709617678; x=1741153678; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=8PKySOBMAvS6WGGNBcMzQBYISRMganf4McDyOJ8ToBs=; b=ND5ivht1rDEA/dp37lXew9Hi5iawOGlyOVgeRz3sR7hjxPitNI5rbet2 HUuVZXDzH05U5Bd8YdMtUBtoxnkMR1fCq/1tz+TDu5qvkvKf2xhtpvYR2 sBbdNejKsoWeW7QDtT3PhnBuq8xQ0wr/4c57TEjQbKZLlY9+0Z2jDaG1s vdAFnQoi1dxptx44LrI2CuMTPbNXIEnd1/EngV1E6+LdmZ5yAuwXO8uVD cgqGyy/d8xL/O0bPVJrpswquuNUHIlFw/F96fLFgAsMyGbNpBbsYGArl3 NSvOjjTDu+LO4Yud05Yo57EjCxsdruhUFrlwTF9Y/2oN1IfcVfNcdv9Jz A==; X-IronPort-AV: E=McAfee;i="6600,9927,11003"; a="4006179" X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="4006179" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by orvoesa111.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2024 21:47:56 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="40135861" Received: from sae-gw02.sh.intel.com (HELO localhost) ([10.239.45.110]) by orviesa002.jf.intel.com with ESMTP; 04 Mar 2024 21:47:53 -0800 From: Yuan Liu To: peterx@redhat.com, farosas@suse.de Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com, bryan.zhang@bytedance.com, yuan1.liu@intel.com, nanhai.zou@intel.com Subject: [PATCH v4 1/8] docs/migration: add qpl compression feature Date: Mon, 4 Mar 2024 22:00:21 +0800 Message-Id: <20240304140028.1590649-2-yuan1.liu@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20240304140028.1590649-1-yuan1.liu@intel.com> References: <20240304140028.1590649-1-yuan1.liu@intel.com> MIME-Version: 1.0 Received-SPF: pass client-ip=198.175.65.19; envelope-from=yuan1.liu@intel.com; helo=mgamail.intel.com X-Spam_score_int: -15 X-Spam_score: -1.6 X-Spam_bar: - X-Spam_report: (-1.6 / 5.0 requ) BAYES_00=-1.9, DATE_IN_PAST_12_24=1.049, DKIMWL_WL_HIGH=-0.571, 
DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org add QPL compression method introduction Signed-off-by: Yuan Liu Reviewed-by: Nanhai Zou --- docs/devel/migration/features.rst | 1 + docs/devel/migration/qpl-compression.rst | 231 +++++++++++++++++++++++ 2 files changed, 232 insertions(+) create mode 100644 docs/devel/migration/qpl-compression.rst diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst index a9acaf618e..9819393c12 100644 --- a/docs/devel/migration/features.rst +++ b/docs/devel/migration/features.rst @@ -10,3 +10,4 @@ Migration has plenty of features to support different use cases. dirty-limit vfio virtio + qpl-compression diff --git a/docs/devel/migration/qpl-compression.rst b/docs/devel/migration/qpl-compression.rst new file mode 100644 index 0000000000..42c7969d30 --- /dev/null +++ b/docs/devel/migration/qpl-compression.rst @@ -0,0 +1,231 @@ +=============== +QPL Compression +=============== +The Intel Query Processing Library (Intel ``QPL``) is an open-source library to +provide compression and decompression features and it is based on deflate +compression algorithm (RFC 1951). + +The ``QPL`` compression relies on Intel In-Memory Analytics Accelerator(``IAA``) +and Shared Virtual Memory(``SVM``) technology, they are new features supported +from Intel 4th Gen Intel Xeon Scalable processors, codenamed Sapphire Rapids +processor(``SPR``). + +For more ``QPL`` introduction, please refer to: + +https://intel.github.io/qpl/documentation/introduction_docs/introduction.html + +QPL Compression Framework +========================= + +:: + + +----------------+ +------------------+ + | MultiFD Service| |accel-config tool | + +-------+--------+ +--------+---------+ + | | + | | + +-------+--------+ | Setup IAA + | QPL library | | Resources + +-------+---+----+ | + | | | + | +-------------+-------+ + | Open IAA | + | Devices +-----+-----+ + | |idxd driver| + | +-----+-----+ + | | + | | + | +-----+-----+ + +-----------+IAA Devices| + Submit jobs +-----------+ + via enqcmd + + +Intel In-Memory Analytics Accelerator (Intel IAA) Introduction +================================================================ + +Intel ``IAA`` is an accelerator that has been designed to help benefit +in-memory databases and analytic workloads. There are three main areas +that Intel ``IAA`` can assist with analytics primitives (scan, filter, etc.), +sparse data compression and memory tiering. + +``IAA`` Manual Documentation: + +https://www.intel.com/content/www/us/en/content-details/721858/intel-in-memory-analytics-accelerator-architecture-specification + +IAA Device Enabling +------------------- + +- Enabling ``IAA`` devices for platform configuration, please refer to: + +https://www.intel.com/content/www/us/en/content-details/780887/intel-in-memory-analytics-accelerator-intel-iaa.html + +- ``IAA`` device driver is ``Intel Data Accelerator Driver (idxd)``, it is + recommended that the minimum version of Linux kernel is 5.18. + +- Add ``"intel_iommu=on,sm_on"`` parameter to kernel command line + for ``SVM`` feature enabling. 
+ +Here is an easy way to verify ``IAA`` device driver and ``SVM``, refer to: + +https://github.com/intel/idxd-config/tree/stable/test + +IAA Device Management +--------------------- + +The number of ``IAA`` devices will vary depending on the Xeon product model. +On a ``SPR`` server, there can be a maximum of 8 ``IAA`` devices, with up to +4 devices per socket. + +By default, all ``IAA`` devices are disabled and need to be configured and +enabled by users manually. + +Check the number of devices through the following command + +.. code-block:: shell + + # lspci -d 8086:0cfe + # 6a:02.0 System peripheral: Intel Corporation Device 0cfe + # 6f:02.0 System peripheral: Intel Corporation Device 0cfe + # 74:02.0 System peripheral: Intel Corporation Device 0cfe + # 79:02.0 System peripheral: Intel Corporation Device 0cfe + # e7:02.0 System peripheral: Intel Corporation Device 0cfe + # ec:02.0 System peripheral: Intel Corporation Device 0cfe + # f1:02.0 System peripheral: Intel Corporation Device 0cfe + # f6:02.0 System peripheral: Intel Corporation Device 0cfe + +IAA Device Configuration +------------------------ + +The ``accel-config`` tool is used to enable ``IAA`` devices and configure +``IAA`` hardware resources(work queues and engines). One ``IAA`` device +has 8 work queues and 8 processing engines, multiple engines can be assigned +to a work queue via ``group`` attribute. + +One example of configuring and enabling an ``IAA`` device. + +.. code-block:: shell + + # accel-config config-engine iax1/engine1.0 -g 0 + # accel-config config-engine iax1/engine1.1 -g 0 + # accel-config config-engine iax1/engine1.2 -g 0 + # accel-config config-engine iax1/engine1.3 -g 0 + # accel-config config-engine iax1/engine1.4 -g 0 + # accel-config config-engine iax1/engine1.5 -g 0 + # accel-config config-engine iax1/engine1.6 -g 0 + # accel-config config-engine iax1/engine1.7 -g 0 + # accel-config config-wq iax1/wq1.0 -g 0 -s 128 -p 10 -b 1 -t 128 -m shared -y user -n app1 -d user + # accel-config enable-device iax1 + # accel-config enable-wq iax1/wq1.0 + +.. note:: + IAX is an early name for IAA + +- The ``IAA`` device index is 1, use ``ls -lh /sys/bus/dsa/devices/iax*`` + command to query the ``IAA`` device index. + +- 8 engines and 1 work queue are configured in group 0, so all compression jobs + submitted to this work queue can be processed by all engines at the same time. + +- Set work queue attributes including the work mode, work queue size and so on. + +- Enable the ``IAA1`` device and work queue 1.0 + +.. note:: + Set work queue mode to shared mode, since ``QPL`` library only supports + shared mode + +For more detailed configuration, please refer to: + +https://github.com/intel/idxd-config/tree/stable/Documentation/accfg + +IAA Resources Allocation For Migration +-------------------------------------- + +There is no ``IAA`` resource configuration parameters for migration and +``accel-config`` tool configuration cannot directly specify the ``IAA`` +resources used for migration. + +``QPL`` will use all work queues that are enabled and set to shared mode, +and use all engines assigned to the work queues with shared mode. + +By default, ``QPL`` will only use the local ``IAA`` device for compression +job processing. The local ``IAA`` device means that the CPU of the job +submission and the ``IAA`` device are on the same socket, so one CPU +can submit the jobs to up to 4 ``IAA`` devices. 
+ +Shared Virtual Memory(SVM) Introduction +======================================= + +An ability for an accelerator I/O device to operate in the same virtual +memory space of applications on host processors. It also implies the +ability to operate from pageable memory, avoiding functional requirements +to pin memory for DMA operations. + +When using ``SVM`` technology, users do not need to reserve memory for the +``IAA`` device and perform pin memory operation. The ``IAA`` device can +directly access data using the virtual address of the process. + +For more ``SVM`` technology, please refer to: + +https://docs.kernel.org/next/x86/sva.html + + +How To Use QPL Compression In Migration +======================================= + +1 - Installation of ``accel-config`` tool and ``QPL`` library + + - Install ``accel-config`` tool from https://github.com/intel/idxd-config + - Install ``QPL`` library from https://github.com/intel/qpl + +2 - Configure and enable ``IAA`` devices and work queues via ``accel-config`` + +3 - Build ``Qemu`` with ``--enable-qpl`` parameter + + E.g. configure --target-list=x86_64-softmmu --enable-kvm ``--enable-qpl`` + +4 - Start VMs with ``sudo`` command or ``root`` permission + + Use the ``sudo`` command or ``root`` privilege to start the source and + destination virtual machines, since migration service needs permission + to access ``IAA`` hardware resources. + +5 - Enable ``QPL`` compression during migration + + Set ``migrate_set_parameter multifd-compression qpl`` when migrating, the + ``QPL`` compression does not support configuring the compression level, it + only supports one compression level. + +The Difference Between QPL And ZLIB +=================================== + +Although both ``QPL`` and ``ZLIB`` are based on the deflate compression +algorithm, and ``QPL`` can support the header and tail of ``ZLIB``, ``QPL`` +is still not fully compatible with the ``ZLIB`` compression in the migration. + +``QPL`` only supports 4K history buffer, and ``ZLIB`` is 32K by default. The +``ZLIB`` compressed data that ``QPL`` may not decompress correctly and +vice versa. + +``QPL`` does not support the ``Z_SYNC_FLUSH`` operation in ``ZLIB`` streaming +compression, current ``ZLIB`` implementation uses ``Z_SYNC_FLUSH``, so each +``multifd`` thread has a ``ZLIB`` streaming context, and all page compression +and decompression are based on this stream. ``QPL`` cannot decompress such data +and vice versa. + +The introduction for ``Z_SYNC_FLUSH``, please refer to: + +https://www.zlib.net/manual.html + +The Best Practices +================== + +When the virtual machine's pages are not populated and the ``IAA`` device is +used, I/O page faults occur, which can impact performance due to a large number +of flush ``IOTLB`` operations. + +Since the normal pages on the source side are all populated, ``IOTLB`` caused +by I/O page fault will not occur. On the destination side, a large number +of normal pages need to be loaded, so it is recommended to add ``-mem-prealloc`` +parameter on the destination side. 
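The usage steps documented above can be condensed into the following command sketch. It is only an illustration: the memory size, port number and the elided VM options are placeholders, and it assumes the multifd capability is switched on from an HMP monitor on both hosts, which multifd compression requires.

  # build QEMU with QPL support (step 3 above)
  ./configure --target-list=x86_64-softmmu --enable-kvm --enable-qpl
  make

  # destination host: start the incoming VM with preallocated memory,
  # as recommended in the best practices section
  sudo qemu-system-x86_64 -enable-kvm -m 16G -mem-prealloc \
      -incoming tcp:0.0.0.0:4444 [...]

  # source host: start the VM to be migrated
  sudo qemu-system-x86_64 -enable-kvm -m 16G [...]

  # on both HMP monitors: enable multifd and select the qpl method
  (qemu) migrate_set_capability multifd on
  (qemu) migrate_set_parameter multifd-compression qpl

  # on the source monitor: start the migration
  (qemu) migrate -d tcp:<destination-ip>:4444

Both sides must select the same multifd compression method; since qpl supports only a single compression level, no level parameter needs to be set.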
From patchwork Mon Mar 4 14:00:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuan Liu X-Patchwork-Id: 1908016 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=ecGCpAyl; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=patchwork.ozlabs.org) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Tpl4K1rpjz23qm for ; Tue, 5 Mar 2024 16:49:05 +1100 (AEDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1rhNem-0004PU-6B; Tue, 05 Mar 2024 00:48:04 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNek-0004Os-8k for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:02 -0500 Received: from mgamail.intel.com ([198.175.65.19]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNei-0003aU-ED for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:01 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1709617680; x=1741153680; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=8o9PzPxYWHyGA94uYLgZFj1vQmFnu0Bc2BJBZXqmcdg=; b=ecGCpAyl7acuWspOUGg3sez98tm3kRNPNkUgE8ts7qdgKxnspkRXpf13 iUHr5X6U+s/5hzmyRheOvYuViW9n1OL/C6jQ/rSscKYcQQTBYEHNW7RcG KVgWXDE4bS4gMTTEiP8iju3EFNmcRGPzb1WrYLtDR3chup+VZQC8qfEMU YfhdXc7HeghKWJuFoA6ZJFb+DsHKttfZPaYNYzzrfBR0qrMkHSfYK6HK9 aHShI04r6NISpxOuw5JBKo5cjuaG4ZPnAtH1txgPQyShKzsUf3qOCAYI+ Q+BZ3u6sh6KE0TMwCuOxPas13Ny9S7FlUPCYyIft44yHoUUvA9+qzZg68 w==; X-IronPort-AV: E=McAfee;i="6600,9927,11003"; a="4006205" X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="4006205" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by orvoesa111.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2024 21:47:59 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="40135883" Received: from sae-gw02.sh.intel.com (HELO localhost) ([10.239.45.110]) by orviesa002.jf.intel.com with ESMTP; 04 Mar 2024 21:47:57 -0800 From: Yuan Liu To: peterx@redhat.com, farosas@suse.de Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com, bryan.zhang@bytedance.com, yuan1.liu@intel.com, nanhai.zou@intel.com Subject: [PATCH v4 2/8] migration/multifd: add get_iov_count in the multifd method Date: Mon, 4 Mar 2024 22:00:22 +0800 Message-Id: <20240304140028.1590649-3-yuan1.liu@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20240304140028.1590649-1-yuan1.liu@intel.com> References: <20240304140028.1590649-1-yuan1.liu@intel.com> MIME-Version: 1.0 Received-SPF: pass client-ip=198.175.65.19; envelope-from=yuan1.liu@intel.com; helo=mgamail.intel.com X-Spam_score_int: -15 X-Spam_score: -1.6 X-Spam_bar: - X-Spam_report: (-1.6 / 5.0 requ) BAYES_00=-1.9, DATE_IN_PAST_12_24=1.049, 
DKIMWL_WL_HIGH=-0.571, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org the new function get_iov_count is used to get the number of IOVs required by a specified multifd method Different multifd methods may require different numbers of IOVs. Based on streaming compression of zlib and zstd, all pages will be compressed to a data block, so an IOV is required to send this data block. For no compression, each IOV is used to send a page, so the number of IOVs required is the same as the number of pages. Signed-off-by: Yuan Liu Reviewed-by: Nanhai Zou --- migration/multifd-zlib.c | 18 +++++++++++++++++- migration/multifd-zstd.c | 18 +++++++++++++++++- migration/multifd.c | 24 +++++++++++++++++++++--- migration/multifd.h | 2 ++ 4 files changed, 57 insertions(+), 5 deletions(-) diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c index 012e3bdea1..35187f2aff 100644 --- a/migration/multifd-zlib.c +++ b/migration/multifd-zlib.c @@ -313,13 +313,29 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp) return 0; } +/** + * zlib_get_iov_count: get the count of IOVs + * + * For zlib streaming compression, all pages will be compressed into a data + * block, and an IOV is requested for sending this block. + * + * Returns the count of the IOVs + * + * @page_count: Indicate the maximum count of pages processed by multifd + */ +static uint32_t zlib_get_iov_count(uint32_t page_count) +{ + return 1; +} + static MultiFDMethods multifd_zlib_ops = { .send_setup = zlib_send_setup, .send_cleanup = zlib_send_cleanup, .send_prepare = zlib_send_prepare, .recv_setup = zlib_recv_setup, .recv_cleanup = zlib_recv_cleanup, - .recv_pages = zlib_recv_pages + .recv_pages = zlib_recv_pages, + .get_iov_count = zlib_get_iov_count }; static void multifd_zlib_register(void) diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c index dc8fe43e94..25ed1add2a 100644 --- a/migration/multifd-zstd.c +++ b/migration/multifd-zstd.c @@ -304,13 +304,29 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp) return 0; } +/** + * zstd_get_iov_count: get the count of IOVs + * + * For zstd streaming compression, all pages will be compressed into a data + * block, and an IOV is requested for sending this block. 
+ * + * Returns the count of the IOVs + * + * @page_count: Indicate the maximum count of pages processed by multifd + */ +static uint32_t zstd_get_iov_count(uint32_t page_count) +{ + return 1; +} + static MultiFDMethods multifd_zstd_ops = { .send_setup = zstd_send_setup, .send_cleanup = zstd_send_cleanup, .send_prepare = zstd_send_prepare, .recv_setup = zstd_recv_setup, .recv_cleanup = zstd_recv_cleanup, - .recv_pages = zstd_recv_pages + .recv_pages = zstd_recv_pages, + .get_iov_count = zstd_get_iov_count }; static void multifd_zstd_register(void) diff --git a/migration/multifd.c b/migration/multifd.c index adfe8c9a0a..787402247e 100644 --- a/migration/multifd.c +++ b/migration/multifd.c @@ -209,13 +209,29 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp) return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp); } +/** + * nocomp_get_iov_count: get the count of IOVs + * + * For no compression, the count of IOVs required is the same as the count of + * pages + * + * Returns the count of the IOVs + * + * @page_count: Indicate the maximum count of pages processed by multifd + */ +static uint32_t nocomp_get_iov_count(uint32_t page_count) +{ + return page_count; +} + static MultiFDMethods multifd_nocomp_ops = { .send_setup = nocomp_send_setup, .send_cleanup = nocomp_send_cleanup, .send_prepare = nocomp_send_prepare, .recv_setup = nocomp_recv_setup, .recv_cleanup = nocomp_recv_cleanup, - .recv_pages = nocomp_recv_pages + .recv_pages = nocomp_recv_pages, + .get_iov_count = nocomp_get_iov_count }; static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = { @@ -998,6 +1014,8 @@ bool multifd_send_setup(void) Error *local_err = NULL; int thread_count, ret = 0; uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size(); + /* We need one extra place for the packet header */ + uint32_t iov_count = 1; uint8_t i; if (!migrate_multifd()) { @@ -1012,6 +1030,7 @@ bool multifd_send_setup(void) qemu_sem_init(&multifd_send_state->channels_ready, 0); qatomic_set(&multifd_send_state->exiting, 0); multifd_send_state->ops = multifd_ops[migrate_multifd_compression()]; + iov_count += multifd_send_state->ops->get_iov_count(page_count); for (i = 0; i < thread_count; i++) { MultiFDSendParams *p = &multifd_send_state->params[i]; @@ -1026,8 +1045,7 @@ bool multifd_send_setup(void) p->packet->magic = cpu_to_be32(MULTIFD_MAGIC); p->packet->version = cpu_to_be32(MULTIFD_VERSION); p->name = g_strdup_printf("multifdsend_%d", i); - /* We need one extra place for the packet header */ - p->iov = g_new0(struct iovec, page_count + 1); + p->iov = g_new0(struct iovec, iov_count); p->page_size = qemu_target_page_size(); p->page_count = page_count; p->write_flags = 0; diff --git a/migration/multifd.h b/migration/multifd.h index 8a1cad0996..d82495c508 100644 --- a/migration/multifd.h +++ b/migration/multifd.h @@ -201,6 +201,8 @@ typedef struct { void (*recv_cleanup)(MultiFDRecvParams *p); /* Read all pages */ int (*recv_pages)(MultiFDRecvParams *p, Error **errp); + /* Get the count of required IOVs */ + uint32_t (*get_iov_count)(uint32_t page_count); } MultiFDMethods; void multifd_register_ops(int method, MultiFDMethods *ops); From patchwork Mon Mar 4 14:00:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuan Liu X-Patchwork-Id: 1908017 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) 
header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=URH9jwX1; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=patchwork.ozlabs.org) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Tpl4W3FnJz23fC for ; Tue, 5 Mar 2024 16:49:15 +1100 (AEDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1rhNeq-0004QK-KO; Tue, 05 Mar 2024 00:48:08 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNeo-0004Q4-MB for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:06 -0500 Received: from mgamail.intel.com ([198.175.65.19]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNem-0003d1-MW for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:06 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1709617684; x=1741153684; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Usm0Cr4TBGAFZbQzTV+QxZ9r1UjnFlad/IwuFFs/hd4=; b=URH9jwX1+otbrdSwnB12RWMvvoEUcJTtveJ3i5hioKg54FKO2jQ1Ip/Z ER/kQ8nhy/LBfz9PCoPqbf8zvxT34aPg1f4sXZDlnCCxgbRqfHgJglWK5 4+zzCeEpqouT9eGwdgw84YM1LFqXFSHKGfgUNHQ7hCTTiWvHl046dKDfc tJRHF9XBMW0sNfRMfqpZFYLDQDDl/OBbmkqIZYeDQ8le49g5eAc2ZPryr lX/c1p3M1CYkuOSQFS+lTCwGtlfCZnNnmsMFdmJ3lfgVJaFLUuea+FXEi 5f/Q+KbGUwhcd9A0KB3beVDsOHjH3k/Micveb1n+IZInCZ6nE+ftTcq71 Q==; X-IronPort-AV: E=McAfee;i="6600,9927,11003"; a="4006232" X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="4006232" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by orvoesa111.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2024 21:48:03 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="40135902" Received: from sae-gw02.sh.intel.com (HELO localhost) ([10.239.45.110]) by orviesa002.jf.intel.com with ESMTP; 04 Mar 2024 21:48:00 -0800 From: Yuan Liu To: peterx@redhat.com, farosas@suse.de Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com, bryan.zhang@bytedance.com, yuan1.liu@intel.com, nanhai.zou@intel.com Subject: [PATCH v4 3/8] configure: add --enable-qpl build option Date: Mon, 4 Mar 2024 22:00:23 +0800 Message-Id: <20240304140028.1590649-4-yuan1.liu@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20240304140028.1590649-1-yuan1.liu@intel.com> References: <20240304140028.1590649-1-yuan1.liu@intel.com> MIME-Version: 1.0 Received-SPF: pass client-ip=198.175.65.19; envelope-from=yuan1.liu@intel.com; helo=mgamail.intel.com X-Spam_score_int: -15 X-Spam_score: -1.6 X-Spam_bar: - X-Spam_report: (-1.6 / 5.0 requ) BAYES_00=-1.9, DATE_IN_PAST_12_24=1.049, DKIMWL_WL_HIGH=-0.571, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org add --enable-qpl and --disable-qpl options to enable and disable the QPL compression method for multifd migration. the Query Processing Library (QPL) is an open-source library that supports data compression and decompression features. The QPL compression is based on the deflate compression algorithm and use Intel In-Memory Analytics Accelerator(IAA) hardware for compression and decompression acceleration. Please refer to the following for more information about QPL https://intel.github.io/qpl/documentation/introduction_docs/introduction.html Signed-off-by: Yuan Liu Reviewed-by: Nanhai Zou --- meson.build | 18 ++++++++++++++++++ meson_options.txt | 2 ++ scripts/meson-buildoptions.sh | 3 +++ 3 files changed, 23 insertions(+) diff --git a/meson.build b/meson.build index c1dc83e4c0..2dea1e6834 100644 --- a/meson.build +++ b/meson.build @@ -1197,6 +1197,22 @@ if not get_option('zstd').auto() or have_block required: get_option('zstd'), method: 'pkg-config') endif +qpl = not_found +if not get_option('qpl').auto() + libqpl = cc.find_library('qpl', required: false) + if not libqpl.found() + error('libqpl not found, please install it from ' + + 'https://intel.github.io/qpl/documentation/get_started_docs/installation.html') + endif + libaccel = cc.find_library('accel-config', required: false) + if not libaccel.found() + error('libaccel-config not found, please install it from ' + + 'https://github.com/intel/idxd-config') + endif + qpl = declare_dependency(dependencies: [libqpl, libaccel, + cc.find_library('dl', required: get_option('qpl'))], + link_args: ['-lstdc++']) +endif virgl = not_found have_vhost_user_gpu = have_tools and host_os == 'linux' and pixman.found() @@ -2298,6 +2314,7 @@ config_host_data.set('CONFIG_MALLOC_TRIM', has_malloc_trim) config_host_data.set('CONFIG_STATX', has_statx) config_host_data.set('CONFIG_STATX_MNT_ID', has_statx_mnt_id) config_host_data.set('CONFIG_ZSTD', zstd.found()) +config_host_data.set('CONFIG_QPL', qpl.found()) config_host_data.set('CONFIG_FUSE', fuse.found()) config_host_data.set('CONFIG_FUSE_LSEEK', fuse_lseek.found()) config_host_data.set('CONFIG_SPICE_PROTOCOL', spice_protocol.found()) @@ -4438,6 +4455,7 @@ summary_info += {'snappy support': snappy} summary_info += {'bzip2 support': libbzip2} summary_info += {'lzfse support': liblzfse} summary_info += {'zstd support': zstd} +summary_info += {'Query Processing Library support': qpl} summary_info += {'NUMA host support': numa} summary_info += {'capstone': capstone} summary_info += {'libpmem support': libpmem} diff --git a/meson_options.txt b/meson_options.txt index 0a99a059ec..06cd675572 100644 --- a/meson_options.txt +++ b/meson_options.txt @@ -259,6 +259,8 @@ option('xkbcommon', type : 'feature', value : 'auto', description: 'xkbcommon support') option('zstd', type : 'feature', value : 'auto', description: 'zstd compression support') +option('qpl', type : 'feature', value : 'auto', + description: 'Query Processing Library support') option('fuse', type: 'feature', value: 'auto', description: 'FUSE block device export') option('fuse_lseek', type : 'feature', value : 'auto', diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh index 680fa3f581..784f74fde9 100644 --- a/scripts/meson-buildoptions.sh +++ b/scripts/meson-buildoptions.sh @@ -222,6 +222,7 @@ meson_options_help() { printf "%s\n" ' Xen PCI passthrough support' printf "%s\n" ' xkbcommon xkbcommon 
support' printf "%s\n" ' zstd zstd compression support' + printf "%s\n" ' qpl Query Processing Library support' } _meson_option_parse() { case $1 in @@ -562,6 +563,8 @@ _meson_option_parse() { --disable-xkbcommon) printf "%s" -Dxkbcommon=disabled ;; --enable-zstd) printf "%s" -Dzstd=enabled ;; --disable-zstd) printf "%s" -Dzstd=disabled ;; + --enable-qpl) printf "%s" -Dqpl=enabled ;; + --disable-qpl) printf "%s" -Dqpl=disabled ;; *) return 1 ;; esac } From patchwork Mon Mar 4 14:00:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuan Liu X-Patchwork-Id: 1908012 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=WYY8jTsH; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=patchwork.ozlabs.org) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Tpl4163p9z23qm for ; Tue, 5 Mar 2024 16:48:47 +1100 (AEDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1rhNf1-0004T1-Cw; Tue, 05 Mar 2024 00:48:19 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNex-0004Sn-6K for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:15 -0500 Received: from mgamail.intel.com ([198.175.65.19]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNev-0003eg-8g for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:14 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1709617693; x=1741153693; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=7sHmriUwSGMAnFTcA5AVBxwma0hCf6t82yyyCrzLIDI=; b=WYY8jTsHBSRlrqyXbaTrljy9wzbVxHJGFDo/S3Tctw+RPPYzsWcrIFVw b91Sv2k6vPuV85jbSCVYX8oFCNUqCV7jYZPrmERV1wOmG5JI2b2vep4vW +GHpAAiXkGdh3IxPAQDG/E9VDIe9CXwEHIQmYVDPFFb2mZ/okdjfTblo/ 9oIKi5ixsKeophk6Eviu/7EkgMhF8S019KyuWLVP9ZdrYHrgVMHTJcnqr AEvFwOylavYUz/s4lrf5mBHURrqv8m8fx+PGWQjNTqnnK1sYbWVKiTSUf RHFT70hIzYKR9W7T4DdApyzOMwYe3FQloyyn4AmVLNtFSMVd44+kFhi2s Q==; X-IronPort-AV: E=McAfee;i="6600,9927,11003"; a="4006265" X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="4006265" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by orvoesa111.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2024 21:48:11 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="40135958" Received: from sae-gw02.sh.intel.com (HELO localhost) ([10.239.45.110]) by orviesa002.jf.intel.com with ESMTP; 04 Mar 2024 21:48:09 -0800 From: Yuan Liu To: peterx@redhat.com, farosas@suse.de Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com, bryan.zhang@bytedance.com, yuan1.liu@intel.com, nanhai.zou@intel.com Subject: [PATCH v4 4/8] migration/multifd: add qpl compression method Date: Mon, 4 Mar 2024 22:00:24 +0800 Message-Id: 
<20240304140028.1590649-5-yuan1.liu@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20240304140028.1590649-1-yuan1.liu@intel.com> References: <20240304140028.1590649-1-yuan1.liu@intel.com> MIME-Version: 1.0 Received-SPF: pass client-ip=198.175.65.19; envelope-from=yuan1.liu@intel.com; helo=mgamail.intel.com X-Spam_score_int: -15 X-Spam_score: -1.6 X-Spam_bar: - X-Spam_report: (-1.6 / 5.0 requ) BAYES_00=-1.9, DATE_IN_PAST_12_24=1.049, DKIMWL_WL_HIGH=-0.571, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org add the Query Processing Library (QPL) compression method Although both qpl and zlib support deflate compression, qpl will only use the In-Memory Analytics Accelerator(IAA) for compression and decompression, and IAA is not compatible with the Zlib in migration, so qpl is used as a new compression method for migration. How to enable qpl compression during migration: migrate_set_parameter multifd-compression qpl The qpl only supports one compression level, there is no qpl compression level parameter added, users do not need to specify the qpl compression level. Signed-off-by: Yuan Liu Reviewed-by: Nanhai Zou --- hw/core/qdev-properties-system.c | 2 +- migration/meson.build | 1 + migration/multifd-qpl.c | 158 +++++++++++++++++++++++++++++++ migration/multifd.h | 1 + qapi/migration.json | 7 +- 5 files changed, 167 insertions(+), 2 deletions(-) create mode 100644 migration/multifd-qpl.c diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c index 1a396521d5..b4f0e5cbdb 100644 --- a/hw/core/qdev-properties-system.c +++ b/hw/core/qdev-properties-system.c @@ -658,7 +658,7 @@ const PropertyInfo qdev_prop_fdc_drive_type = { const PropertyInfo qdev_prop_multifd_compression = { .name = "MultiFDCompression", .description = "multifd_compression values, " - "none/zlib/zstd", + "none/zlib/zstd/qpl", .enum_table = &MultiFDCompression_lookup, .get = qdev_propinfo_get_enum, .set = qdev_propinfo_set_enum, diff --git a/migration/meson.build b/migration/meson.build index 92b1cc4297..c155c2d781 100644 --- a/migration/meson.build +++ b/migration/meson.build @@ -40,6 +40,7 @@ if get_option('live_block_migration').allowed() system_ss.add(files('block.c')) endif system_ss.add(when: zstd, if_true: files('multifd-zstd.c')) +system_ss.add(when: qpl, if_true: files('multifd-qpl.c')) specific_ss.add(when: 'CONFIG_SYSTEM_ONLY', if_true: files('ram.c', diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c new file mode 100644 index 0000000000..6b94e732ac --- /dev/null +++ b/migration/multifd-qpl.c @@ -0,0 +1,158 @@ +/* + * Multifd qpl compression accelerator implementation + * + * Copyright (c) 2023 Intel Corporation + * + * Authors: + * Yuan Liu + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. 
+ */ + +#include "qemu/osdep.h" +#include "qemu/rcu.h" +#include "exec/ramblock.h" +#include "exec/target_page.h" +#include "qapi/error.h" +#include "migration.h" +#include "trace.h" +#include "options.h" +#include "multifd.h" +#include "qpl/qpl.h" + +struct qpl_data { + qpl_job **job_array; + /* the number of allocated jobs */ + uint32_t job_num; + /* the size of data processed by a qpl job */ + uint32_t data_size; + /* compressed data buffer */ + uint8_t *zbuf; + /* the length of compressed data */ + uint32_t *zbuf_hdr; +}; + +/** + * qpl_send_setup: setup send side + * + * Setup each channel with QPL compression. + * + * Returns 0 for success or -1 for error + * + * @p: Params for the channel that we are using + * @errp: pointer to an error + */ +static int qpl_send_setup(MultiFDSendParams *p, Error **errp) +{ + /* Implement in next patch */ + return -1; +} + +/** + * qpl_send_cleanup: cleanup send side + * + * Close the channel and return memory. + * + * @p: Params for the channel that we are using + * @errp: pointer to an error + */ +static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp) +{ + /* Implement in next patch */ +} + +/** + * qpl_send_prepare: prepare data to be able to send + * + * Create a compressed buffer with all the pages that we are going to + * send. + * + * Returns 0 for success or -1 for error + * + * @p: Params for the channel that we are using + * @errp: pointer to an error + */ +static int qpl_send_prepare(MultiFDSendParams *p, Error **errp) +{ + /* Implement in next patch */ + return -1; +} + +/** + * qpl_recv_setup: setup receive side + * + * Create the compressed channel and buffer. + * + * Returns 0 for success or -1 for error + * + * @p: Params for the channel that we are using + * @errp: pointer to an error + */ +static int qpl_recv_setup(MultiFDRecvParams *p, Error **errp) +{ + /* Implement in next patch */ + return -1; +} + +/** + * qpl_recv_cleanup: setup receive side + * + * Close the channel and return memory. + * + * @p: Params for the channel that we are using + */ +static void qpl_recv_cleanup(MultiFDRecvParams *p) +{ + /* Implement in next patch */ +} + +/** + * qpl_recv_pages: read the data from the channel into actual pages + * + * Read the compressed buffer, and uncompress it into the actual + * pages. + * + * Returns 0 for success or -1 for error + * + * @p: Params for the channel that we are using + * @errp: pointer to an error + */ +static int qpl_recv_pages(MultiFDRecvParams *p, Error **errp) +{ + /* Implement in next patch */ + return -1; +} + +/** + * qpl_get_iov_count: get the count of IOVs + * + * For QPL compression, in addition to requesting the same number of IOVs + * as the page, it also requires an additional IOV to store all compressed + * data lengths. 
+ * + * Returns the count of the IOVs + * + * @page_count: Indicate the maximum count of pages processed by multifd + */ +static uint32_t qpl_get_iov_count(uint32_t page_count) +{ + return page_count + 1; +} + +static MultiFDMethods multifd_qpl_ops = { + .send_setup = qpl_send_setup, + .send_cleanup = qpl_send_cleanup, + .send_prepare = qpl_send_prepare, + .recv_setup = qpl_recv_setup, + .recv_cleanup = qpl_recv_cleanup, + .recv_pages = qpl_recv_pages, + .get_iov_count = qpl_get_iov_count +}; + +static void multifd_qpl_register(void) +{ + multifd_register_ops(MULTIFD_COMPRESSION_QPL, &multifd_qpl_ops); +} + +migration_init(multifd_qpl_register); diff --git a/migration/multifd.h b/migration/multifd.h index d82495c508..0e9361df2a 100644 --- a/migration/multifd.h +++ b/migration/multifd.h @@ -33,6 +33,7 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset); #define MULTIFD_FLAG_NOCOMP (0 << 1) #define MULTIFD_FLAG_ZLIB (1 << 1) #define MULTIFD_FLAG_ZSTD (2 << 1) +#define MULTIFD_FLAG_QPL (4 << 1) /* This value needs to be a multiple of qemu_target_page_size() */ #define MULTIFD_PACKET_SIZE (512 * 1024) diff --git a/qapi/migration.json b/qapi/migration.json index 5a565d9b8d..e48e3d7065 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -625,11 +625,16 @@ # # @zstd: use zstd compression method. # +# @qpl: use qpl compression method. Query Processing Library(qpl) is based on +# the deflate compression algorithm and use the Intel In-Memory Analytics +# Accelerator(IAA) hardware accelerated compression and decompression. +# # Since: 5.0 ## { 'enum': 'MultiFDCompression', 'data': [ 'none', 'zlib', - { 'name': 'zstd', 'if': 'CONFIG_ZSTD' } ] } + { 'name': 'zstd', 'if': 'CONFIG_ZSTD' }, + { 'name': 'qpl', 'if': 'CONFIG_QPL' } ] } ## # @MigMode: From patchwork Mon Mar 4 14:00:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuan Liu X-Patchwork-Id: 1908020 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=VJph5R3t; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=patchwork.ozlabs.org) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Tpl5j1pMxz23hX for ; Tue, 5 Mar 2024 16:50:17 +1100 (AEDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1rhNf7-0004UN-LV; Tue, 05 Mar 2024 00:48:25 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNf5-0004Tl-2M for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:23 -0500 Received: from mgamail.intel.com ([198.175.65.19]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNf3-0003ev-85 for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:22 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1709617701; 
x=1741153701; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=2iEuTnLvQ/OH0PNo/1fvbYIPLyWaF7RSSa/asMCWoTc=; b=VJph5R3tywJxeyzUpVK1b8rHX7/yPABNhv6JMNhnsspibn6morwArBP8 mqAUIo6+h0n1rfF5NfxY0LEcS7Co5WItXuxuUQcfqHAx/aLHmUvPvJ1pE tk29GOx31GgomU8+jdRsXvHa384I1xVkOB67vzaBuWIlscqNeIpwFkwDT s35eoAcF/pDbrci3QmHckAWV1QbGX0USd2BJk9rR4ViW6WNT4l9tFiK4N veDkQvKFsExIruwNwMB6kZpF1blW+KhNoZOfvtttjlDz5gU73i7WBQG0j Ggh9JgSxfibXjk27IYKziAjyvROnjSJ601rH7yTQVVZsf7V/G7zjwVKxX A==; X-IronPort-AV: E=McAfee;i="6600,9927,11003"; a="4006288" X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="4006288" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by orvoesa111.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2024 21:48:19 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="40135985" Received: from sae-gw02.sh.intel.com (HELO localhost) ([10.239.45.110]) by orviesa002.jf.intel.com with ESMTP; 04 Mar 2024 21:48:17 -0800 From: Yuan Liu To: peterx@redhat.com, farosas@suse.de Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com, bryan.zhang@bytedance.com, yuan1.liu@intel.com, nanhai.zou@intel.com Subject: [PATCH v4 5/8] migration/multifd: implement initialization of qpl compression Date: Mon, 4 Mar 2024 22:00:25 +0800 Message-Id: <20240304140028.1590649-6-yuan1.liu@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20240304140028.1590649-1-yuan1.liu@intel.com> References: <20240304140028.1590649-1-yuan1.liu@intel.com> MIME-Version: 1.0 Received-SPF: pass client-ip=198.175.65.19; envelope-from=yuan1.liu@intel.com; helo=mgamail.intel.com X-Spam_score_int: -15 X-Spam_score: -1.6 X-Spam_bar: - X-Spam_report: (-1.6 / 5.0 requ) BAYES_00=-1.9, DATE_IN_PAST_12_24=1.049, DKIMWL_WL_HIGH=-0.571, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org the qpl initialization includes memory allocation for compressed data and the qpl job initialization. the qpl initialization will check whether the In-Memory Analytics Accelerator(IAA) hardware is available, if the platform does not have IAA hardware or the IAA hardware is not available, the QPL compression initialization will fail. 
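The behaviour described here (initialization fails when no usable IAA hardware is present) can be checked outside QEMU with a small standalone probe. The sketch below is only an illustration, not part of the series; it uses the same QPL calls the patch relies on (qpl_get_job_size, qpl_init_job, qpl_fini_job), and the include path and link line (roughly -lqpl -laccel-config -lstdc++, matching the meson change in patch 3/8) may need adjusting for a given QPL installation.

  /*
   * Standalone probe of the QPL hardware path. qpl_init_job() on
   * qpl_path_hardware is where a missing or unconfigured IAA device
   * shows up as an error, which is the failure mode the multifd qpl
   * setup reports during migration setup.
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <stdint.h>
  #include <qpl/qpl.h>

  int main(void)
  {
      uint32_t job_size = 0;
      qpl_status status;
      qpl_job *job;

      /* ask how much memory one job descriptor needs on the hardware path */
      status = qpl_get_job_size(qpl_path_hardware, &job_size);
      if (status != QPL_STS_OK) {
          fprintf(stderr, "qpl_get_job_size failed: %d\n", status);
          return 1;
      }

      job = calloc(1, job_size);
      if (job == NULL) {
          return 1;
      }

      /* fails if no IAA device with an enabled shared work queue is usable */
      status = qpl_init_job(qpl_path_hardware, job);
      if (status != QPL_STS_OK) {
          fprintf(stderr, "qpl_init_job failed: %d (IAA not available?)\n", status);
          free(job);
          return 1;
      }

      printf("IAA hardware path available, job size %u bytes\n", job_size);

      /* compression jobs would be prepared and submitted here */
      qpl_fini_job(job);
      free(job);
      return 0;
  }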
Signed-off-by: Yuan Liu Reviewed-by: Nanhai Zou --- migration/multifd-qpl.c | 128 ++++++++++++++++++++++++++++++++++++++-- 1 file changed, 122 insertions(+), 6 deletions(-) diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c index 6b94e732ac..f4db97ca01 100644 --- a/migration/multifd-qpl.c +++ b/migration/multifd-qpl.c @@ -33,6 +33,100 @@ struct qpl_data { uint32_t *zbuf_hdr; }; +static void free_zbuf(struct qpl_data *qpl) +{ + if (qpl->zbuf != NULL) { + munmap(qpl->zbuf, qpl->job_num * qpl->data_size); + qpl->zbuf = NULL; + } + if (qpl->zbuf_hdr != NULL) { + g_free(qpl->zbuf_hdr); + qpl->zbuf_hdr = NULL; + } +} + +static int alloc_zbuf(struct qpl_data *qpl, uint8_t chan_id, Error **errp) +{ + int flags = MAP_PRIVATE | MAP_POPULATE | MAP_ANONYMOUS; + uint32_t size = qpl->job_num * qpl->data_size; + uint8_t *buf; + + buf = (uint8_t *) mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0); + if (buf == MAP_FAILED) { + error_setg(errp, "multifd: %u: alloc_zbuf failed, job num %u, size %u", + chan_id, qpl->job_num, qpl->data_size); + return -1; + } + qpl->zbuf = buf; + qpl->zbuf_hdr = g_new0(uint32_t, qpl->job_num); + return 0; +} + +static void free_jobs(struct qpl_data *qpl) +{ + for (int i = 0; i < qpl->job_num; i++) { + qpl_fini_job(qpl->job_array[i]); + g_free(qpl->job_array[i]); + qpl->job_array[i] = NULL; + } + g_free(qpl->job_array); + qpl->job_array = NULL; +} + +static int alloc_jobs(struct qpl_data *qpl, uint8_t chan_id, Error **errp) +{ + qpl_status status; + uint32_t job_size = 0; + qpl_job *job = NULL; + /* always use IAA hardware accelerator */ + qpl_path_t path = qpl_path_hardware; + + status = qpl_get_job_size(path, &job_size); + if (status != QPL_STS_OK) { + error_setg(errp, "multifd: %u: qpl_get_job_size failed with error %d", + chan_id, status); + return -1; + } + qpl->job_array = g_new0(qpl_job *, qpl->job_num); + for (int i = 0; i < qpl->job_num; i++) { + job = g_malloc0(job_size); + status = qpl_init_job(path, job); + if (status != QPL_STS_OK) { + error_setg(errp, "multifd: %u: qpl_init_job failed with error %d", + chan_id, status); + free_jobs(qpl); + return -1; + } + qpl->job_array[i] = job; + } + return 0; +} + +static int init_qpl(struct qpl_data *qpl, uint32_t job_num, uint32_t data_size, + uint8_t chan_id, Error **errp) +{ + qpl->job_num = job_num; + qpl->data_size = data_size; + if (alloc_zbuf(qpl, chan_id, errp) != 0) { + return -1; + } + if (alloc_jobs(qpl, chan_id, errp) != 0) { + free_zbuf(qpl); + return -1; + } + return 0; +} + +static void deinit_qpl(struct qpl_data *qpl) +{ + if (qpl != NULL) { + free_jobs(qpl); + free_zbuf(qpl); + qpl->job_num = 0; + qpl->data_size = 0; + } +} + /** * qpl_send_setup: setup send side * @@ -45,8 +139,15 @@ struct qpl_data { */ static int qpl_send_setup(MultiFDSendParams *p, Error **errp) { - /* Implement in next patch */ - return -1; + struct qpl_data *qpl; + + qpl = g_new0(struct qpl_data, 1); + if (init_qpl(qpl, p->page_count, p->page_size, p->id, errp) != 0) { + g_free(qpl); + return -1; + } + p->data = qpl; + return 0; } /** @@ -59,7 +160,11 @@ static int qpl_send_setup(MultiFDSendParams *p, Error **errp) */ static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp) { - /* Implement in next patch */ + struct qpl_data *qpl = p->data; + + deinit_qpl(qpl); + g_free(p->data); + p->data = NULL; } /** @@ -91,8 +196,15 @@ static int qpl_send_prepare(MultiFDSendParams *p, Error **errp) */ static int qpl_recv_setup(MultiFDRecvParams *p, Error **errp) { - /* Implement in next patch */ - return -1; + struct 
qpl_data *qpl; + + qpl = g_new0(struct qpl_data, 1); + if (init_qpl(qpl, p->page_count, p->page_size, p->id, errp) != 0) { + g_free(qpl); + return -1; + } + p->data = qpl; + return 0; } /** @@ -104,7 +216,11 @@ static int qpl_recv_setup(MultiFDRecvParams *p, Error **errp) */ static void qpl_recv_cleanup(MultiFDRecvParams *p) { - /* Implement in next patch */ + struct qpl_data *qpl = p->data; + + deinit_qpl(qpl); + g_free(p->data); + p->data = NULL; } /** From patchwork Mon Mar 4 14:00:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuan Liu X-Patchwork-Id: 1908018 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.a=rsa-sha256 header.s=Intel header.b=OO4r/tof; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=patchwork.ozlabs.org) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4Tpl5C2nMMz23fC for ; Tue, 5 Mar 2024 16:49:51 +1100 (AEDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1rhNfA-0004Uv-KK; Tue, 05 Mar 2024 00:48:28 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNf8-0004UQ-TE for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:26 -0500 Received: from mgamail.intel.com ([192.198.163.13]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1rhNf6-0003fB-Rp for qemu-devel@nongnu.org; Tue, 05 Mar 2024 00:48:26 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1709617705; x=1741153705; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=lOg7Z2NiVWexPEJdPMGkjDH7mkPnvTrF6i/RWRX6gvQ=; b=OO4r/tof1F7FwIQ1tu4MDJ1NOQbC3sWxp0wO4oC1U/Pn8qZ4px7VlcgD ilKYhs52tjK4D8jodpzCtBjClZOhQ4EQWncKoirsw6oDcRw2Vo0ZM6NxT +3W2zQyzhsTfjGPGaxNrzSX/CwduXxVoDeLCACxuhtTF8LlojKd9FqSBy QPIJyc2cuu31j+/FuSIAeLE5Y/MihvFQJzvHBnfDKYYUtGDrR0hCIt7w6 n8fBsclkVhsxavQt3qsT5fcU8YfykAS3w+qYmF0EilfooPh1maxu/IkHW IBj9OwCWa8cq/LPzXP4/gjD7xrw847htFMlCuF+4oX5KzHhBaJqi56pEH A==; X-IronPort-AV: E=McAfee;i="6600,9927,11003"; a="7096091" X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="7096091" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Mar 2024 21:48:23 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.06,205,1705392000"; d="scan'208";a="46785282" Received: from sae-gw02.sh.intel.com (HELO localhost) ([10.239.45.110]) by orviesa001.jf.intel.com with ESMTP; 04 Mar 2024 21:48:22 -0800 From: Yuan Liu To: peterx@redhat.com, farosas@suse.de Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com, bryan.zhang@bytedance.com, yuan1.liu@intel.com, nanhai.zou@intel.com Subject: [PATCH v4 6/8] migration/multifd: implement qpl compression and decompression Date: Mon, 4 Mar 2024 22:00:26 +0800 
Message-Id: <20240304140028.1590649-7-yuan1.liu@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20240304140028.1590649-1-yuan1.liu@intel.com> References: <20240304140028.1590649-1-yuan1.liu@intel.com> MIME-Version: 1.0 Received-SPF: pass client-ip=192.198.163.13; envelope-from=yuan1.liu@intel.com; helo=mgamail.intel.com X-Spam_score_int: -15 X-Spam_score: -1.6 X-Spam_bar: - X-Spam_report: (-1.6 / 5.0 requ) BAYES_00=-1.9, DATE_IN_PAST_12_24=1.049, DKIMWL_WL_HIGH=-0.571, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org each qpl job is used to (de)compress a normal page and it can be processed independently by the IAA hardware. All qpl jobs are submitted to the hardware at once, and wait for all jobs completion. Signed-off-by: Yuan Liu Reviewed-by: Nanhai Zou --- migration/multifd-qpl.c | 219 +++++++++++++++++++++++++++++++++++++++- 1 file changed, 215 insertions(+), 4 deletions(-) diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c index f4db97ca01..eb815ea3be 100644 --- a/migration/multifd-qpl.c +++ b/migration/multifd-qpl.c @@ -167,6 +167,112 @@ static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp) p->data = NULL; } +static inline void prepare_job(qpl_job *job, uint8_t *input, uint32_t input_len, + uint8_t *output, uint32_t output_len, + bool is_compression) +{ + job->op = is_compression ? qpl_op_compress : qpl_op_decompress; + job->next_in_ptr = input; + job->next_out_ptr = output; + job->available_in = input_len; + job->available_out = output_len; + job->flags = QPL_FLAG_FIRST | QPL_FLAG_LAST | QPL_FLAG_OMIT_VERIFY; + /* only supports one compression level */ + job->level = 1; +} + +/** + * set_raw_data_hdr: set the length of raw data + * + * If the length of the compressed output data is greater than or equal to + * the page size, then set the compressed data length to the data size and + * send raw data directly. + * + * @qpl: pointer to the qpl_data structure + * @index: the index of the compression job header + */ +static inline void set_raw_data_hdr(struct qpl_data *qpl, uint32_t index) +{ + assert(index < qpl->job_num); + qpl->zbuf_hdr[index] = cpu_to_be32(qpl->data_size); +} + +/** + * is_raw_data: check if the data is raw data + * + * The raw data length is always equal to data size, which is the + * size of one page. 
+ * + * Returns true if the data is raw data, otherwise false + * + * @qpl: pointer to the qpl_data structure + * @index: the index of the decompressed job header + */ +static inline bool is_raw_data(struct qpl_data *qpl, uint32_t index) +{ + assert(index < qpl->job_num); + return qpl->zbuf_hdr[index] == qpl->data_size; +} + +static int run_comp_jobs(MultiFDSendParams *p, Error **errp) +{ + qpl_status status; + struct qpl_data *qpl = p->data; + MultiFDPages_t *pages = p->pages; + uint32_t job_num = pages->num; + qpl_job *job = NULL; + uint32_t off = 0; + + assert(job_num <= qpl->job_num); + /* submit all compression jobs */ + for (int i = 0; i < job_num; i++) { + job = qpl->job_array[i]; + /* the compressed data size should be less than one page */ + prepare_job(job, pages->block->host + pages->offset[i], qpl->data_size, + qpl->zbuf + off, qpl->data_size - 1, true); +retry: + status = qpl_submit_job(job); + if (status == QPL_STS_OK) { + off += qpl->data_size; + } else if (status == QPL_STS_QUEUES_ARE_BUSY_ERR) { + goto retry; + } else { + error_setg(errp, "multifd %u: qpl_submit_job failed with error %d", + p->id, status); + return -1; + } + } + + /* wait all jobs to complete */ + for (int i = 0; i < job_num; i++) { + job = qpl->job_array[i]; + status = qpl_wait_job(job); + if (status == QPL_STS_OK) { + qpl->zbuf_hdr[i] = cpu_to_be32(job->total_out); + p->iov[p->iovs_num].iov_len = job->total_out; + p->iov[p->iovs_num].iov_base = qpl->zbuf + (qpl->data_size * i); + p->next_packet_size += job->total_out; + } else if (status == QPL_STS_MORE_OUTPUT_NEEDED) { + /* + * the compression job does not fail, the output data + * size is larger than the provided memory size. In this + * case, raw data is sent directly to the destination. + */ + set_raw_data_hdr(qpl, i); + p->iov[p->iovs_num].iov_len = qpl->data_size; + p->iov[p->iovs_num].iov_base = pages->block->host + + pages->offset[i]; + p->next_packet_size += qpl->data_size; + } else { + error_setg(errp, "multifd %u: qpl_wait_job failed with error %d", + p->id, status); + return -1; + } + p->iovs_num++; + } + return 0; +} + /** * qpl_send_prepare: prepare data to be able to send * @@ -180,8 +286,25 @@ static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp) */ static int qpl_send_prepare(MultiFDSendParams *p, Error **errp) { - /* Implement in next patch */ - return -1; + struct qpl_data *qpl = p->data; + uint32_t hdr_size = p->pages->num * sizeof(uint32_t); + + multifd_send_prepare_header(p); + + assert(p->pages->num <= qpl->job_num); + /* prepare the header that stores the lengths of all compressed data */ + p->iov[1].iov_base = (uint8_t *) qpl->zbuf_hdr; + p->iov[1].iov_len = hdr_size; + p->iovs_num++; + p->next_packet_size += hdr_size; + p->flags |= MULTIFD_FLAG_QPL; + + if (run_comp_jobs(p, errp) != 0) { + return -1; + } + + multifd_send_fill_packet(p); + return 0; } /** @@ -223,6 +346,60 @@ static void qpl_recv_cleanup(MultiFDRecvParams *p) p->data = NULL; } +static int run_decomp_jobs(MultiFDRecvParams *p, Error **errp) +{ + qpl_status status; + qpl_job *job; + struct qpl_data *qpl = p->data; + uint32_t off = 0; + uint32_t job_num = p->normal_num; + + assert(job_num <= qpl->job_num); + /* submit all decompression jobs */ + for (int i = 0; i < job_num; i++) { + /* for the raw data, load it directly */ + if (is_raw_data(qpl, i)) { + memcpy(p->host + p->normal[i], qpl->zbuf + off, qpl->data_size); + off += qpl->data_size; + continue; + } + job = qpl->job_array[i]; + prepare_job(job, qpl->zbuf + off, qpl->zbuf_hdr[i], + p->host + 
@@ -223,6 +346,60 @@ static void qpl_recv_cleanup(MultiFDRecvParams *p)
     p->data = NULL;
 }
 
+static int run_decomp_jobs(MultiFDRecvParams *p, Error **errp)
+{
+    qpl_status status;
+    qpl_job *job;
+    struct qpl_data *qpl = p->data;
+    uint32_t off = 0;
+    uint32_t job_num = p->normal_num;
+
+    assert(job_num <= qpl->job_num);
+    /* submit all decompression jobs */
+    for (int i = 0; i < job_num; i++) {
+        /* for the raw data, load it directly */
+        if (is_raw_data(qpl, i)) {
+            memcpy(p->host + p->normal[i], qpl->zbuf + off, qpl->data_size);
+            off += qpl->data_size;
+            continue;
+        }
+        job = qpl->job_array[i];
+        prepare_job(job, qpl->zbuf + off, qpl->zbuf_hdr[i],
+                    p->host + p->normal[i], qpl->data_size, false);
+retry:
+        status = qpl_submit_job(job);
+        if (status == QPL_STS_OK) {
+            off += qpl->zbuf_hdr[i];
+        } else if (status == QPL_STS_QUEUES_ARE_BUSY_ERR) {
+            goto retry;
+        } else {
+            error_setg(errp, "multifd %u: qpl_submit_job failed with error %d",
+                       p->id, status);
+            return -1;
+        }
+    }
+
+    /* wait for all jobs to complete */
+    for (int i = 0; i < job_num; i++) {
+        if (is_raw_data(qpl, i)) {
+            continue;
+        }
+        job = qpl->job_array[i];
+        status = qpl_wait_job(job);
+        if (status != QPL_STS_OK) {
+            error_setg(errp, "multifd %u: qpl_wait_job failed with error %d",
+                       p->id, status);
+            return -1;
+        }
+        if (job->total_out != qpl->data_size) {
+            error_setg(errp, "multifd %u: decompressed len %u, expected len %u",
+                       p->id, job->total_out, qpl->data_size);
+            return -1;
+        }
+    }
+    return 0;
+}
+
 /**
  * qpl_recv_pages: read the data from the channel into actual pages
  *
@@ -236,8 +413,42 @@ static void qpl_recv_cleanup(MultiFDRecvParams *p)
  */
 static int qpl_recv_pages(MultiFDRecvParams *p, Error **errp)
 {
-    /* Implement in next patch */
-    return -1;
+    struct qpl_data *qpl = p->data;
+    uint32_t in_size = p->next_packet_size;
+    uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
+    uint32_t hdr_len = p->normal_num * sizeof(uint32_t);
+    uint32_t data_len = 0;
+    int ret;
+
+    if (flags != MULTIFD_FLAG_QPL) {
+        error_setg(errp, "multifd %u: flags received %x flags expected %x",
+                   p->id, flags, MULTIFD_FLAG_QPL);
+        return -1;
+    }
+    /* read the compressed data lengths */
+    assert(hdr_len < in_size);
+    ret = qio_channel_read_all(p->c, (void *) qpl->zbuf_hdr, hdr_len, errp);
+    if (ret != 0) {
+        return ret;
+    }
+    assert(p->normal_num <= qpl->job_num);
+    for (int i = 0; i < p->normal_num; i++) {
+        qpl->zbuf_hdr[i] = be32_to_cpu(qpl->zbuf_hdr[i]);
+        data_len += qpl->zbuf_hdr[i];
+        assert(qpl->zbuf_hdr[i] <= qpl->data_size);
+    }
+
+    /* read the compressed data */
+    assert(in_size == hdr_len + data_len);
+    ret = qio_channel_read_all(p->c, (void *) qpl->zbuf, data_len, errp);
+    if (ret != 0) {
+        return ret;
+    }
+
+    if (run_decomp_jobs(p, errp) != 0) {
+        return -1;
+    }
+    return 0;
 }
 
 /**
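For readers who want to try the QPL calls used above outside of QEMU, the
following is a minimal, self-contained sketch of the same compress/decompress
flow. It mirrors the job settings from prepare_job() but uses the synchronous
qpl_execute_job() and the software path so it can run without IAA hardware;
treat it as an illustration under those assumptions, not code from this series.

/* build against the QPL library (exact link flags depend on the installation) */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <qpl/qpl.h>

#define PAGE_SIZE 4096u

static qpl_job *alloc_job(qpl_path_t path)
{
    uint32_t size = 0;
    qpl_job *job = NULL;

    if (qpl_get_job_size(path, &size) != QPL_STS_OK) {
        return NULL;
    }
    job = calloc(1, size);
    if (job == NULL || qpl_init_job(path, job) != QPL_STS_OK) {
        free(job);
        return NULL;
    }
    return job;
}

int main(void)
{
    uint8_t page[PAGE_SIZE], comp[PAGE_SIZE], out[PAGE_SIZE];
    qpl_job *job = alloc_job(qpl_path_software);
    uint32_t comp_len;

    if (job == NULL) {
        fprintf(stderr, "QPL job init failed\n");
        return 1;
    }
    memset(page, 'A', sizeof(page));    /* a highly compressible "page" */

    /* compress: same flags and level as prepare_job() in the patch */
    job->op = qpl_op_compress;
    job->next_in_ptr = page;
    job->available_in = PAGE_SIZE;
    job->next_out_ptr = comp;
    job->available_out = PAGE_SIZE - 1; /* would trigger the raw fallback */
    job->flags = QPL_FLAG_FIRST | QPL_FLAG_LAST | QPL_FLAG_OMIT_VERIFY;
    job->level = 1;
    if (qpl_execute_job(job) != QPL_STS_OK) {
        fprintf(stderr, "compression failed\n");
        return 1;
    }
    comp_len = job->total_out;

    /* decompress back into a scratch page and verify the round trip */
    job->op = qpl_op_decompress;
    job->next_in_ptr = comp;
    job->available_in = comp_len;
    job->next_out_ptr = out;
    job->available_out = PAGE_SIZE;
    job->flags = QPL_FLAG_FIRST | QPL_FLAG_LAST;
    if (qpl_execute_job(job) != QPL_STS_OK ||
        job->total_out != PAGE_SIZE || memcmp(page, out, PAGE_SIZE) != 0) {
        fprintf(stderr, "decompression or verification failed\n");
        return 1;
    }

    qpl_fini_job(job);
    free(job);
    printf("compressed %u -> %u bytes and verified round trip\n",
           PAGE_SIZE, (unsigned)comp_len);
    return 0;
}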
From patchwork Mon Mar 4 14:00:27 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yuan Liu
X-Patchwork-Id: 1908014
From: Yuan Liu
To: peterx@redhat.com, farosas@suse.de
Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com, bryan.zhang@bytedance.com,
    yuan1.liu@intel.com, nanhai.zou@intel.com
Subject: [PATCH v4 7/8] migration/multifd: fix zlib and zstd compression levels not working
Date: Mon, 4 Mar 2024 22:00:27 +0800
Message-Id: <20240304140028.1590649-8-yuan1.liu@intel.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20240304140028.1590649-1-yuan1.liu@intel.com>
References: <20240304140028.1590649-1-yuan1.liu@intel.com>

Apply the zlib and zstd compression levels in the multifd parameter
test-apply and apply paths, and add compression level tests.

Signed-off-by: Yuan Liu
Reviewed-by: Nanhai Zou
Reported-by: Xiaohui Li
---
 migration/options.c          | 12 ++++++++++++
 tests/qtest/migration-test.c | 16 ++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/migration/options.c b/migration/options.c
index 3e3e0b93b4..1cd3cc7c33 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -1312,6 +1312,12 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
     if (params->has_multifd_compression) {
         dest->multifd_compression = params->multifd_compression;
     }
+    if (params->has_multifd_zlib_level) {
+        dest->multifd_zlib_level = params->multifd_zlib_level;
+    }
+    if (params->has_multifd_zstd_level) {
+        dest->multifd_zstd_level = params->multifd_zstd_level;
+    }
     if (params->has_xbzrle_cache_size) {
         dest->xbzrle_cache_size = params->xbzrle_cache_size;
     }
@@ -1447,6 +1453,12 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
     if (params->has_multifd_compression) {
         s->parameters.multifd_compression = params->multifd_compression;
     }
+    if (params->has_multifd_zlib_level) {
+        s->parameters.multifd_zlib_level = params->multifd_zlib_level;
+    }
+    if (params->has_multifd_zstd_level) {
+        s->parameters.multifd_zstd_level = params->multifd_zstd_level;
+    }
     if (params->has_xbzrle_cache_size) {
         s->parameters.xbzrle_cache_size = params->xbzrle_cache_size;
         xbzrle_cache_resize(params->xbzrle_cache_size, errp);
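For context on why the missing assignments mattered: the multifd zlib and zstd
senders read these parameters once at channel setup time, so if
migrate_params_apply() never copies the user-supplied value, the default level
is always used. The fragment below is a paraphrase from memory of how
migration/multifd-zlib.c and migration/multifd-zstd.c consume the levels around
the time of this series; treat the exact shape as an assumption, not part of
this patch.

/* Sketch only: approximate consumers of the two parameters. */
#include <zlib.h>
#include <zstd.h>
#include "options.h"   /* migrate_multifd_zlib_level(), migrate_multifd_zstd_level() */

static int zlib_send_setup_sketch(z_stream *zs)
{
    /* zlib sender: the level is chosen when the deflate stream is created */
    return deflateInit(zs, migrate_multifd_zlib_level());
}

static size_t zstd_send_setup_sketch(ZSTD_CCtx *cctx)
{
    /* zstd sender: the level is applied to the compression context */
    return ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel,
                                  migrate_multifd_zstd_level());
}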
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 8a5bb1752e..23d50fe599 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -2621,10 +2621,24 @@ test_migrate_precopy_tcp_multifd_start(QTestState *from,
     return test_migrate_precopy_tcp_multifd_start_common(from, to, "none");
 }
 
+static void
+test_and_set_multifd_compression_level(QTestState *who, const char *param)
+{
+    /* The default compression level is 1, test a level other than 1 */
+    int level = 2;
+
+    migrate_set_parameter_int(who, param, level);
+    migrate_check_parameter_int(who, param, level);
+    /* only test compression level 1 during migration */
+    migrate_set_parameter_int(who, param, 1);
+}
+
 static void *
 test_migrate_precopy_tcp_multifd_zlib_start(QTestState *from,
                                             QTestState *to)
 {
+    /* the compression level is used only on the source side. */
+    test_and_set_multifd_compression_level(from, "multifd-zlib-level");
     return test_migrate_precopy_tcp_multifd_start_common(from, to, "zlib");
 }
 
@@ -2633,6 +2647,8 @@ static void *
 test_migrate_precopy_tcp_multifd_zstd_start(QTestState *from,
                                             QTestState *to)
 {
+    /* the compression level is used only on the source side. */
+    test_and_set_multifd_compression_level(from, "multifd-zstd-level");
     return test_migrate_precopy_tcp_multifd_start_common(from, to, "zstd");
 }
 #endif /* CONFIG_ZSTD */
From patchwork Mon Mar 4 14:00:28 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yuan Liu
X-Patchwork-Id: 1908019
From: Yuan Liu
To: peterx@redhat.com, farosas@suse.de
Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com, bryan.zhang@bytedance.com,
    yuan1.liu@intel.com, nanhai.zou@intel.com
Subject: [PATCH v4 8/8] tests/migration-test: add qpl compression test
Date: Mon, 4 Mar 2024 22:00:28 +0800
Message-Id: <20240304140028.1590649-9-yuan1.liu@intel.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20240304140028.1590649-1-yuan1.liu@intel.com>
References: <20240304140028.1590649-1-yuan1.liu@intel.com>

Add qpl to the compression method tests for multifd migration.

Migration with qpl compression needs access to IAA hardware resources,
so please run "check-qtest" with sudo or root permission; otherwise the
migration test will fail.

Signed-off-by: Yuan Liu
Reviewed-by: Nanhai Zou
---
 tests/qtest/migration-test.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
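The root requirement comes from the IAA work queue device nodes, which are
normally not accessible to unprivileged users. As a rough illustration only:
the /dev/iax path and the wq naming below reflect how IDXD/IAA devices are
usually exposed once configured with accel-config, and are an assumption here
rather than something this patch checks; the helper is hypothetical.

/*
 * Hypothetical helper, not part of the patch: return true if at least one
 * IAA work queue device node exists and is readable/writable by the caller.
 */
#include <glob.h>
#include <stdbool.h>
#include <stddef.h>
#include <unistd.h>

static bool iaa_wq_accessible(void)
{
    glob_t g;
    bool ok = false;

    /* enabled IAA work queues usually show up as /dev/iax/wq<dev>.<wq> */
    if (glob("/dev/iax/wq*", 0, NULL, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc && !ok; i++) {
            ok = access(g.gl_pathv[i], R_OK | W_OK) == 0;
        }
        globfree(&g);
    }
    return ok;
}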
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 23d50fe599..96842f9515 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -2653,6 +2653,15 @@ test_migrate_precopy_tcp_multifd_zstd_start(QTestState *from,
 }
 #endif /* CONFIG_ZSTD */
 
+#ifdef CONFIG_QPL
+static void *
+test_migrate_precopy_tcp_multifd_qpl_start(QTestState *from,
+                                           QTestState *to)
+{
+    return test_migrate_precopy_tcp_multifd_start_common(from, to, "qpl");
+}
+#endif /* CONFIG_QPL */
+
 static void test_multifd_tcp_none(void)
 {
     MigrateCommon args = {
@@ -2688,6 +2697,17 @@ static void test_multifd_tcp_zstd(void)
 }
 #endif
 
+#ifdef CONFIG_QPL
+static void test_multifd_tcp_qpl(void)
+{
+    MigrateCommon args = {
+        .listen_uri = "defer",
+        .start_hook = test_migrate_precopy_tcp_multifd_qpl_start,
+    };
+    test_precopy_common(&args);
+}
+#endif
+
 #ifdef CONFIG_GNUTLS
 static void *
 test_migrate_multifd_tcp_tls_psk_start_match(QTestState *from,
@@ -3574,6 +3594,10 @@ int main(int argc, char **argv)
     migration_test_add("/migration/multifd/tcp/plain/zstd",
                        test_multifd_tcp_zstd);
 #endif
+#ifdef CONFIG_QPL
+    migration_test_add("/migration/multifd/tcp/plain/qpl",
+                       test_multifd_tcp_qpl);
+#endif
 #ifdef CONFIG_GNUTLS
     migration_test_add("/migration/multifd/tcp/tls/psk/match",
                        test_multifd_tcp_tls_psk_match);