
[RFC,6/9] migration: use XBZRLE only after bulk stage

Message ID 513F4F1E.2060400@dlhnet.de
State New
Headers show

Commit Message

Peter Lieven March 12, 2013, 3:51 p.m. UTC
at the beginning of migration all pages are marked dirty and
in the first round a bulk migration of all pages is performed.

currently all these pages are copied to the page cache regardless
of whether they are frequently updated or not. this doesn't make sense
since most of these pages are never transferred again.

this patch changes the XBZRLE transfer to only be used after
the bulk stage has been completed. that means a page is added
to the page cache the second time it is transferred and XBZRLE
can benefit from the third transfer onwards.

since the page cache is likely smaller than the number of pages,
it's also likely that in the second round a page is missing from the
cache due to collisions in the bulk phase.

on the other hand a lot of unneccssary mallocs, memdups and frees
are saved.

Signed-off-by: Peter Lieven <pl@kamp.de>
---
  arch_init.c |    2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
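
[Editorial note] To illustrate the collision behaviour the commit message
refers to, here is a toy, self-contained sketch of a direct-mapped page
cache. This is not the QEMU page_cache.c code; all names (toy_cache_insert,
SLOTS, PAGE_SZ) are invented for the example, and the real cache keys slots
by a hash of the address rather than a plain modulo. The point is the same:
during the bulk stage every guest page is inserted exactly once, there are
far more pages than slots, so later insertions evict earlier ones and most
of the malloc/memcpy work done for evicted pages is wasted.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define SLOTS   8          /* toy value; the real cache is configurable but << nr of guest pages */
#define PAGE_SZ 4096

typedef struct {
    uint64_t addr;         /* guest address of the cached page */
    uint8_t *data;         /* copy of the page contents, NULL if slot unused */
} ToySlot;

static ToySlot cache[SLOTS];

static ToySlot *toy_slot(uint64_t addr)
{
    /* direct mapped: each address has exactly one candidate slot */
    return &cache[(addr / PAGE_SZ) % SLOTS];
}

static void toy_cache_insert(uint64_t addr, const uint8_t *page)
{
    ToySlot *s = toy_slot(addr);

    if (!s->data) {
        s->data = malloc(PAGE_SZ);
    }
    /* a colliding entry, if any, is silently overwritten */
    s->addr = addr;
    memcpy(s->data, page, PAGE_SZ);
}

static int toy_cache_is_cached(uint64_t addr)
{
    ToySlot *s = toy_slot(addr);
    return s->data && s->addr == addr;
}

int main(void)
{
    uint8_t page[PAGE_SZ] = { 0 };
    uint64_t addr;

    /* "bulk stage": insert many more pages than the cache has slots */
    for (addr = 0; addr < 64 * PAGE_SZ; addr += PAGE_SZ) {
        toy_cache_insert(addr, page);
    }

    /* "second round": most early pages were evicted by later collisions */
    printf("page 0 still cached?  %d\n", toy_cache_is_cached(0));
    printf("page 63 still cached? %d\n", toy_cache_is_cached(63 * PAGE_SZ));
    return 0;
}

With 64 pages hammered into 8 slots, page 0 has long been evicted by the
time the second round starts, while only the last page inserted into each
slot survives, which is why caching pages during the bulk stage buys little.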

Comments

Eric Blake March 12, 2013, 4:03 p.m. UTC | #1
On 03/12/2013 09:51 AM, Peter Lieven wrote:
> at the beginning of migration all pages are marked dirty and
> in the first round a bulk migration of all pages is performed.
> 
> currently all these pages are copied to the page cache regardless
> of whether they are frequently updated or not. this doesn't make sense
> since most of these pages are never transferred again.
> 
> this patch changes the XBZRLE transfer to only be used after
> the bulk stage has been completed. that means a page is added
> to the page cache the second time it is transferred and XBZRLE
> can benefit from the third transfer onwards.
> 
> since the page cache is likely smaller than the number of pages,
> it's also likely that in the second round a page is missing from the
> cache due to collisions in the bulk phase.
> 
> on the other hand a lot of unneccssary mallocs, memdups and frees

s/unneccssary/unnecessary/

> are saved.

Earlier, I asked you for some benchmark numbers, and you provided some.
 Mentioning them in the commit message will make an even stronger
argument for including this patch.

Patch

diff --git a/arch_init.c b/arch_init.c
index 3d09327..04c82e4 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -451,7 +451,7 @@  static int ram_save_block(QEMUFile *f, bool last_stage)
                                              RAM_SAVE_FLAG_COMPRESS);
                  qemu_put_byte(f, *p);
                  bytes_sent += 1;
-            } else if (migrate_use_xbzrle()) {
+            } else if (!ram_bulk_stage && migrate_use_xbzrle()) {
                  current_addr = block->offset + offset;
                  bytes_sent = save_xbzrle_page(f, p, current_addr, block,
                                                offset, cont, last_stage);
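
[Editorial note] For readers dipping into the series mid-way, below is a
minimal standalone sketch of the control flow this one-line hunk creates.
It is not the actual arch_init.c code; ram_bulk_stage is assumed here to be
the flag introduced earlier in this series, starting out true and being
cleared once the first complete pass over guest RAM has been sent.

#include <stdbool.h>
#include <stdio.h>

static bool ram_bulk_stage = true;   /* assumed: true from migration start */
static bool xbzrle_enabled = true;   /* stands in for migrate_use_xbzrle() */

/* Decide how one dirty page would be sent. */
static const char *send_page(bool is_zero_page)
{
    if (is_zero_page) {
        return "compressed zero page";
    } else if (!ram_bulk_stage && xbzrle_enabled) {
        /* only now is the page copied into the cache / delta encoded */
        return "XBZRLE (page cache) path";
    }
    return "full page";
}

int main(void)
{
    /* first pass: bulk stage, the XBZRLE path is skipped entirely */
    printf("bulk stage:  %s\n", send_page(false));

    ram_bulk_stage = false;          /* first full pass over RAM finished */

    /* later passes: only re-dirtied pages arrive here, XBZRLE kicks in */
    printf("after bulk:  %s\n", send_page(false));
    return 0;
}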