[v5,0/5] Implement QATzip compression method

Message ID 20240711025229.66260-1-yichen.wang@bytedance.com

Message

Yichen Wang July 11, 2024, 2:52 a.m. UTC
v5:
- Rebase changes on top of 59084feb256c617063e0dbe7e64821ae8852d7cf
- Add documentation about migration with qatzip acceleration
- Remove multifd-qatzip-sw-fallback option

v4:
- Rebase changes on top of 1a2d52c7fcaeaaf4f2fe8d4d5183dccaeab67768
- Move the IOV initialization to qatzip implementation
- Only use qatzip to compress normal pages

v3:
- Rebase changes on top of master
- Merge two patches per Fabiano Rosas's comment
- Add versions into comments and documentations

v2:
- Rebase changes on top of recent multifd code changes.
- Use the QATzip APIs 'qzMalloc' and 'qzFree' to allocate QAT buffers (see
  the sketch after this list).
- Remove parameter tuning and use QATzip's defaults for better
  performance.
- Add parameter to enable QAT software fallback.
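
For context, here is a minimal sketch of buffer allocation and compression
through the QATzip C API (illustrative only; this is not the code from this
series, and the NUMA-node, pinned-flag, and output-capacity choices are
assumptions):

    #include <qatzip.h>

    /* Compress one buffer through QAT.  Assumes a session already set up
     * with qzInit() and qzSetupSession(). */
    static int compress_buf(QzSession_T *sess,
                            const unsigned char *src, unsigned int src_len,
                            unsigned char **out, unsigned int *out_len)
    {
        /* Buffers handed to the QAT device come from qzMalloc() so that
         * they are DMA-able (pinned) memory; qzFree() releases them. */
        unsigned int cap = src_len * 2;
        unsigned char *buf = qzMalloc(cap, 0 /* NUMA node */, 1 /* pinned */);
        if (buf == NULL) {
            return -1;
        }

        unsigned int in_len = src_len;
        unsigned int dst_len = cap;
        /* 'last' = 1 marks the end of the compressed stream. */
        if (qzCompress(sess, src, &in_len, buf, &dst_len, 1) != QZ_OK) {
            qzFree(buf);
            return -1;
        }

        *out = buf;
        *out_len = dst_len;
        return 0;
    }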

v1:
https://lists.nongnu.org/archive/html/qemu-devel/2023-12/msg03761.html

* Performance

We present updated performance results. For circumstantial reasons, the
v1 results were measured on a low-bandwidth (1Gbps) network.

Here, we present updated results with a similar setup as before but with
two main differences:

1. Our machines have a ~50Gbps connection, tested using 'iperf3'.
2. We had a bug in our memory allocation that caused us to use only ~1/2
of the VM's RAM. Now we properly allocate and fill nearly all of the VM's
RAM.

Thus, the test setup is as follows:

We perform multifd live migration over TCP using a VM with 64GB memory.
We prepare the machine's memory by powering it on, allocating a large
amount of memory (60GB) as a single buffer, and filling the buffer with
the repeated contents of the Silesia corpus[0]. This is in lieu of a more
realistic memory snapshot, which proved troublesome to acquire.
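
As a rough illustration, a fill program along these lines would do (a
hypothetical reconstruction, not the authors' actual tool; the
pre-concatenated corpus file name 'silesia.bin' is an assumption):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t total = 60ULL << 30;   /* ~60GB fill target */

        /* Assumes the Silesia corpus files were concatenated into a
         * single file ahead of time. */
        FILE *f = fopen("silesia.bin", "rb");
        if (f == NULL) { perror("fopen"); return 1; }
        fseek(f, 0, SEEK_END);
        size_t corpus_len = (size_t)ftell(f);
        rewind(f);

        char *corpus = malloc(corpus_len);
        if (corpus == NULL ||
            fread(corpus, 1, corpus_len, f) != corpus_len) {
            return 1;
        }
        fclose(f);

        char *buf = malloc(total);
        if (buf == NULL) { perror("malloc"); return 1; }

        /* Tile the corpus across the whole buffer so guest RAM holds
         * realistic, compressible data rather than zero pages. */
        for (size_t off = 0; off < total; off += corpus_len) {
            size_t n = total - off < corpus_len ? total - off : corpus_len;
            memcpy(buf + off, corpus, n);
        }

        pause();   /* keep the pages resident while migration runs */
        return 0;
    }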

We analyze CPU usage by averaging the output of 'top', sampled every
second during migration. This is admittedly imprecise, but we feel it
accurately portrays the relative CPU usage of the different compression
methods.

We present the latency, throughput, and CPU usage results for all of the
compression methods, with varying numbers of multifd threads (4, 8, and
16).

[0] The Silesia corpus can be accessed here:
https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia
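
For reference, each run is configured through the usual multifd migration
parameters; with this series applied, 'qatzip' becomes a valid
'multifd-compression' value (HMP commands sketched below; the exact value
name follows the series' qapi/migration.json change):

    (qemu) migrate_set_capability multifd on
    (qemu) migrate_set_parameter multifd-channels 8
    (qemu) migrate_set_parameter multifd-compression qatzip
    (qemu) migrate -d tcp:<dest-host>:<port>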

** Results

4 multifd threads:

    |---------------|---------------|----------------|---------|---------|
    |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
    |---------------|---------------|----------------|---------|---------|
    |qatzip         | 23.13         | 8749.94        |117.50   |186.49   |
    |---------------|---------------|----------------|---------|---------|
    |zlib           |254.35         |  771.87        |388.20   |144.40   |
    |---------------|---------------|----------------|---------|---------|
    |zstd           | 54.52         | 3442.59        |414.59   |149.77   |
    |---------------|---------------|----------------|---------|---------|
    |none           | 12.45         |43739.60        |159.71   |204.96   |
    |---------------|---------------|----------------|---------|---------|

8 multifd threads:

    |---------------|---------------|----------------|---------|---------|
    |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
    |---------------|---------------|----------------|---------|---------|
    |qatzip         | 16.91         |12306.52        |186.37   |391.84   |
    |---------------|---------------|----------------|---------|---------|
    |zlib           |130.11         | 1508.89        |753.86   |289.35   |
    |---------------|---------------|----------------|---------|---------|
    |zstd           | 27.57         | 6823.23        |786.83   |303.80   |
    |---------------|---------------|----------------|---------|---------|
    |none           | 11.82         |46072.63        |163.74   |238.56   |
    |---------------|---------------|----------------|---------|---------|

16 multifd threads:

    |---------------|---------------|----------------|---------|---------|
    |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
    |---------------|---------------|----------------|---------|---------|
    |qatzip         |18.64          |11044.52        | 573.61  |437.65   |
    |---------------|---------------|----------------|---------|---------|
    |zlib           |66.43          | 2955.79        |1469.68  |567.47   |
    |---------------|---------------|----------------|---------|---------|
    |zstd           |14.17          |13290.66        |1504.08  |615.33   |
    |---------------|---------------|----------------|---------|---------|
    |none           |16.82          |32363.26        | 180.74  |217.17   |
    |---------------|---------------|----------------|---------|---------|

** Observations

- In general, not using compression outperforms using compression in a
  non-network-bound environment.
- 'qatzip' outperforms the other compression methods at 4 and 8 workers,
  achieving, at 4 workers, a ~91% latency reduction over 'zlib' and a
  ~58% latency reduction over 'zstd'.
- 'qatzip' maintains performance comparable to 'zstd' at 16 workers,
  showing a ~32% increase in latency. This difference becomes more
  noticeable with more workers, as CPU compression is highly
  parallelizable.
- 'qatzip' compression uses considerably less CPU than other compression
  methods. At 8 workers, 'qatzip' demonstrates a ~75% reduction in
  compression CPU usage compared to 'zstd' and 'zlib'.
- 'qatzip' decompression CPU usage is less impressive: it is somewhat
  worse than 'zstd' and 'zlib' at 4 and 8 workers, though lower at 16.


Bryan Zhang (4):
  meson: Introduce 'qatzip' feature to the build system
  migration: Add migration parameters for QATzip
  migration: Introduce 'qatzip' compression method
  tests/migration: Add integration test for 'qatzip' compression method

Yuan Liu (1):
  docs/migration: add qatzip compression feature

 docs/devel/migration/features.rst           |   1 +
 docs/devel/migration/qatzip-compression.rst | 251 ++++++++++++
 hw/core/qdev-properties-system.c            |   6 +-
 meson.build                                 |  10 +
 meson_options.txt                           |   2 +
 migration/meson.build                       |   1 +
 migration/migration-hmp-cmds.c              |   4 +
 migration/multifd-qatzip.c                  | 403 ++++++++++++++++++++
 migration/multifd.h                         |   5 +-
 migration/options.c                         |  34 ++
 migration/options.h                         |   1 +
 qapi/migration.json                         |  21 +
 scripts/meson-buildoptions.sh               |   3 +
 tests/qtest/meson.build                     |   4 +
 tests/qtest/migration-test.c                |  35 ++
 15 files changed, 778 insertions(+), 3 deletions(-)
 create mode 100644 docs/devel/migration/qatzip-compression.rst
 create mode 100644 migration/multifd-qatzip.c

Comments

Peter Xu July 11, 2024, 3:44 p.m. UTC | #1
On Wed, Jul 10, 2024 at 07:52:24PM -0700, Yichen Wang wrote:
> v5:
> - Rebase changes on top of 59084feb256c617063e0dbe7e64821ae8852d7cf
> - Add documentation about migration with qatzip acceleration
> - Remove multifd-qatzip-sw-fallback option

I think Yuan provided quite a few meaningful comments; did you address all
of them?

You didn't reply in the previous version, and you didn't add anything in
the changelog.  I suggest you at least do one of them in the future so that
reviewers can understand what happened.

Thanks,

Yichen Wang July 11, 2024, 4:48 p.m. UTC | #2
On Thu, Jul 11, 2024 at 8:45 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Jul 10, 2024 at 07:52:24PM -0700, Yichen Wang wrote:
> > v5:
> > - Rebase changes on top of 59084feb256c617063e0dbe7e64821ae8852d7cf
> > - Add documentation about migration with qatzip acceleration
> > - Remove multifd-qatzip-sw-fallback option
>
> I think Yuan provided quite a few meaningful comments; did you address all
> of them?
Yes, I did.
>
> You didn't reply in the previous version, and you didn't add anything in
> the changelog.  I suggest you at least do one of them in the future so that
> reviewers can understand what happened.
They are all very good comments, and instead of replying I just fixed
them all and included them in my next patch. My changelog does include
all the changes and comments we discussed in v4. Sorry, I am new to the
community; in the future I will reply "fixed" in the previous email
before pushing the next version. Thanks a lot, and sorry for that.
>
> Thanks,
>
> --
> Peter Xu
>

Peter Xu July 11, 2024, 7:36 p.m. UTC | #3
On Thu, Jul 11, 2024 at 09:48:02AM -0700, Yichen Wang wrote:
> On Thu, Jul 11, 2024 at 8:45 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Wed, Jul 10, 2024 at 07:52:24PM -0700, Yichen Wang wrote:
> > > v5:
> > > - Rebase changes on top of 59084feb256c617063e0dbe7e64821ae8852d7cf
> > > - Add documentation about migration with qatzip acceleration
> > > - Remove multifd-qatzip-sw-fallback option
> >
> > I think Yuan provided quite a few meaningful comments; did you address all
> > of them?
> Yes, I did.
> >
> > You didn't reply in the previous version, and you didn't add anything in
> > the changelog.  I suggest you at least do one of them in the future so that
> > reviewers can understand what happened.
> They are all very good comments, and instead of replying I just fixed
> them all and included them in my next patch. My changelog does include
> all the changes and comments we discussed in v4. Sorry, I am new to the
> community; in the future I will reply "fixed" in the previous email
> before pushing the next version. Thanks a lot, and sorry for that.

That's all fine!  You can definitely mention them too here in the changelog
if you think that's easier.

One last nitpick: in the major patch you duplicated part of the comment
when I requested a move (the part explaining why you used a buffer rather
than submitting compression for each page without memcpy). I suggest you
simply move that whole comment above, rather than copying it.

I don't have any further questions on this series.

Thanks,