@@ -47,11 +47,25 @@ over any transport.
QEMU interference. Note that QEMU does not flush cached file
data/metadata at the end of migration.
-In addition, support is included for migration using RDMA, which
-transports the page data using ``RDMA``, where the hardware takes care of
-transporting the pages, and the load on the CPU is much lower. While the
-internals of RDMA migration are a bit different, this isn't really visible
-outside the RAM migration code.
+ The file migration also supports using a file that has already been
+ opened. A set of file descriptors is passed to QEMU via an "fdset"
+ (see add-fd QMP command documentation). This method allows a
+ management application to have control over the migration file
+ opening operation. There are, however, strict requirements on this
+ interface when the multifd capability is enabled:
+
+ - the fdset must contain two file descriptors that are not
+   duplicates of each other;
+ - if the direct-io capability is to be used, exactly one of the
+   file descriptors must have the O_DIRECT flag set;
+ - the file must be opened with WRONLY on the migration source side
+   and RDONLY on the migration destination side.
+
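+ Once added, the descriptors can be referenced in the migration URI
+ through the special ``/dev/fdset/NN`` filename. A sketch of the QMP
+ exchange on the source side (the fdset id and offset values are
+ illustrative; the descriptors themselves travel out-of-band over the
+ QMP socket via SCM_RIGHTS)::
+
+   {"execute": "add-fd", "arguments": {"fdset-id": 1}}
+   {"execute": "add-fd", "arguments": {"fdset-id": 1}}
+   {"execute": "migrate",
+    "arguments": {"uri": "file:/dev/fdset/1,offset=0x10000"}}
+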
+- rdma migration: support is included for migration using RDMA, where
+  the hardware takes care of transporting the page data and the load
+  on the CPU is much lower. While the internals of RDMA migration are
+  a bit different, this isn't really visible outside the RAM migration
+  code.
All these migration protocols use the same infrastructure to
save/restore state devices. This infrastructure is shared with the
@@ -16,7 +16,7 @@ location in the file, rather than constantly being added to a
sequential stream. Having the pages at fixed offsets also allows the
usage of O_DIRECT for save/restore of the migration stream as the
pages are ensured to be written respecting O_DIRECT alignment
-restrictions (direct-io support not yet implemented).
+restrictions.
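+
+For example, with a 4 KiB target page size, the page at index ``n`` of
+a RAM block whose page area begins at file offset ``B`` is always
+written at ``B + n * 4096``; as long as ``B`` is page-aligned, every
+write satisfies the O_DIRECT alignment requirement (sizes here are
+illustrative).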
Usage
-----
@@ -35,6 +35,10 @@ Use a ``file:`` URL for migration:
Mapped-ram migration is best done non-live, i.e. by stopping the VM on
the source side before migrating.
+For best performance, enable the ``direct-io`` parameter as well:
+
+ ``migrate_set_parameter direct-io on``
+
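+The QMP equivalent, assuming the parameter is supported by the QEMU
+build in use::
+
+  {"execute": "migrate-set-parameters",
+   "arguments": {"direct-io": true}}
+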
Use-cases
---------