diff mbox

commit b1bbfe72 causes huge slowdown with no kvm

Message ID CA+hQ2+iknsju27tHW1G=DONGvi0+oRHjGkK56FvcKN=vSdmFWg@mail.gmail.com
State New

Commit Message

Luigi Rizzo Nov. 19, 2013, 11 p.m. UTC
I recently found out that without kvm enabled, and especially
with -smp 2 or greater, qemu becomes incredibly slow
(to the point that you can see kernel messages in the
guest print one character at a time).

This happens with a Linux host (even with -smp 1)
and with a FreeBSD host (in that case with -smp 2 or greater;
-smp 1 seems to work fine there).

Bisecting points to this commit as the culprit:


commit b1bbfe72ec1ebf302d97f886cc646466c0abd679
Author: Alex Bligh <alex@alex.org.uk>
Date:   Wed Aug 21 16:02:55 2013 +0100

    aio / timers: On timer modification, qemu_notify or aio_notify

    On qemu_mod_timer_ns, ensure qemu_notify or aio_notify is called to
    end the appropriate poll(), irrespective of use_icount value.

    On qemu_clock_enable, ensure qemu_notify or aio_notify is called for
    all QEMUTimerLists attached to the QEMUClock.

    Signed-off-by: Alex Bligh <alex@alex.org.uk>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

I could not revert this individual commit on top of master because
of other changes, but I notice that one key change in the commit
was to make the call to timerlist_notify() unconditional, whereas
before it was controlled by the "use_icount" variable.

So I tried the small patch below and it seems to restore the original
performance, but I have no idea what use_icount does and
whether the change makes sense.

If not, there is an open problem.

Any ideas ?

cheers
luigi

diff --git a/qemu-timer.c b/qemu-timer.c
index e15ce47..4180a86 100644

Comments

Paolo Bonzini Nov. 20, 2013, 9:41 a.m. UTC | #1
On 20/11/2013 00:00, Luigi Rizzo wrote:
> I recently found out that without kvm enabled, and especially
> with -smp 2 or greater, qemu becomes incredibly slow
> (to the point that you can see kernel messages in the
> guest print one character at a time).
> 
> This happens with a Linux host (even with -smp 1)
> and with FreeBSD host (in this case -smp 2 or greater;
> -smp 1 seems to work fine there).


I could not reproduce it; the profile here seems normal, too:

 24,69%  perf-3281.map                       [.] 0x00007f4ab5ac9903
 14,18%  qemu-system-x86_64                  [.] cpu_x86_exec
  2,67%  libglib-2.0.so.0.3800.1             [.] g_source_iter_next
  2,63%  qemu-system-x86_64                  [.] tcg_optimize
  2,47%  qemu-system-x86_64                  [.] helper_pcmpeqb_xmm
  2,28%  qemu-system-x86_64                  [.] phys_page_find
  1,92%  qemu-system-x86_64                  [.] address_space_translate_internal
  1,53%  qemu-system-x86_64                  [.] tcg_liveness_analysis
  1,40%  qemu-system-x86_64                  [.] tcg_reg_alloc_op
  1,17%  qemu-system-x86_64                  [.] helper_psubb_xmm
  0,97%  qemu-system-x86_64                  [.] disas_insn
  0,96%  qemu-system-x86_64                  [.] cpu_x86_handle_mmu_fault
  0,92%  qemu-system-x86_64                  [.] tcg_out_opc
  0,92%  qemu-system-x86_64                  [.] helper_pmovmskb_xmm
  0,91%  qemu-system-x86_64                  [.] tlb_set_page
  0,75%  qemu-system-x86_64                  [.] address_space_translate
  0,74%  qemu-system-x86_64                  [.] ldq_phys
  0,61%  qemu-system-x86_64                  [.] tcg_gen_code
  0,58%  qemu-system-x86_64                  [.] helper_le_ldq_mmu
  0,56%  qemu-system-x86_64                  [.] helper_ctz

This is on Linux, -smp 2, booting a RHEL6.4 guest.

Paolo
Luigi Rizzo Nov. 20, 2013, 11 a.m. UTC | #2
On Wed, Nov 20, 2013 at 10:41:22AM +0100, Paolo Bonzini wrote:
> On 20/11/2013 00:00, Luigi Rizzo wrote:
> > I recently found out that without kvm enabled, and especially
> > with -smp 2 or greater, qemu becomes incredibly slow
> > (to the point that you can see kernel messages in the
> > guest print one character at a time).
> > 
> > This happens with a Linux host (even with -smp 1)
> > and with FreeBSD host (in this case -smp 2 or greater;
> > -smp 1 seems to work fine there).

Here is my test case that shows the problem:

HOW TO REPRODUCE:

    boot a (small) FreeBSD image below:

	http://info.iet.unipi.it/~luigi/20131119-picobsd.bin 

    with this command (note: no KVM; I also disconnect the NIC
    so I can generate traffic without causing trouble)

	x86_64-softmmu/qemu-system-x86_64 -m 1024 -hda 20131119-picobsd.bin -curses -monitor tcp::4444,server,nowait -net nic,model=e1000 -smp 2

    The image is readonly so you can kill the qemu without problems.

SYMPTOMS:
    as soon as the kernel starts its timer services (there is an
    active kernel thread woken up every millisecond), booting and
    runtime performance in the guest become terribly slow.
    See details on the test below.

HOST / OS
    Tried on a Linux host (Ubuntu 13.10, I believe, with kernel 3.11
    and an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz), but similar
    things also occur on a FreeBSD host. With a FreeBSD host,
    using -smp 1 seems to bypass the problem, whereas on Linux hosts
    the -smp setting does not seem to make a difference.

QEMU VERSION and MODIFICATIONS:

    > git branch -v
    * master     5c5432e Merge remote-tracking branch 'luiz/queue/qmp' into staging

    No changes other than the small diff I proposed to fix the problem:
    > git diff

    diff --git a/qemu-timer.c b/qemu-timer.c
    index e15ce47..4180a86 100644
    --- a/qemu-timer.c
    +++ b/qemu-timer.c
    @@ -380,6 +380,7 @@ static void timerlist_rearm(QEMUTimerList *timer_list)
     {
	 /* Interrupt execution to force deadline recalculation.  */
	 qemu_clock_warp(timer_list->clock->type);
    +if (use_icount) // XXX
	 timerlist_notify(timer_list);
     }

DETAILED SYMPTOMS:

    WITH THE ABOVE PATCH, the kernel boots to a login prompt in 32
    seconds (10 of which are the initial pause, which you can skip
    by pressing Enter). Then you can run the "pkt-gen" program, which I
    normally use to test netmap performance in the guest, and you
    see between 2 and 3 million packets per second (the NIC is disconnected
    from the host, so this should be harmless). Example:

	login: root
	Password: setup
	# pkt-gen -i em0 -f tx

	<should return something between 2 and 3 Mpps>
	<ctrl-c to terminate it >

    WITHOUT THE PATCH, booting becomes slow as soon as the timer tick starts
    and we load dummynet (which also starts a kernel thread every millisecond).
    You should be able to see how the printing of kernel messages slows down
    terribly around this point:

	...
	Timecounters tick every 1.000 msec
	  ipfw2 initialized, divert loadable, nat loadable, default to accept, logging disabled
	  DUMMYNET 0 with IPv6 initialized (100409)
	....

    and then it takes a long and variable time to reach the login prompt,
    easily a couple of minutes or more.
    If you start pkt-gen now, you should see a much lower rate,
    around 300-500 Kpps.



Since the problem seems timer-related, it makes sense that you are
not seeing much difference on Linux, which is probably tickless,
and that the trouble shows up on FreeBSD after the timers are
initialized.

But again I have no idea if my patch (which simply reverts part of
the offending commit) makes sense.

	cheers
	luigi


> 
> I could not reproduce it; the profile here seems normal, too:
> 
>  24,69%  perf-3281.map                       [.] 0x00007f4ab5ac9903
>  14,18%  qemu-system-x86_64                  [.] cpu_x86_exec
>   2,67%  libglib-2.0.so.0.3800.1             [.] g_source_iter_next
>   2,63%  qemu-system-x86_64                  [.] tcg_optimize
>   2,47%  qemu-system-x86_64                  [.] helper_pcmpeqb_xmm
>   2,28%  qemu-system-x86_64                  [.] phys_page_find
>   1,92%  qemu-system-x86_64                  [.] address_space_translate_internal
>   1,53%  qemu-system-x86_64                  [.] tcg_liveness_analysis
>   1,40%  qemu-system-x86_64                  [.] tcg_reg_alloc_op
>   1,17%  qemu-system-x86_64                  [.] helper_psubb_xmm
>   0,97%  qemu-system-x86_64                  [.] disas_insn
>   0,96%  qemu-system-x86_64                  [.] cpu_x86_handle_mmu_fault
>   0,92%  qemu-system-x86_64                  [.] tcg_out_opc
>   0,92%  qemu-system-x86_64                  [.] helper_pmovmskb_xmm
>   0,91%  qemu-system-x86_64                  [.] tlb_set_page
>   0,75%  qemu-system-x86_64                  [.] address_space_translate
>   0,74%  qemu-system-x86_64                  [.] ldq_phys
>   0,61%  qemu-system-x86_64                  [.] tcg_gen_code
>   0,58%  qemu-system-x86_64                  [.] helper_le_ldq_mmu
>   0,56%  qemu-system-x86_64                  [.] helper_ctz
> 
> This is on Linux, -smp 2, booting a RHEL6.4 guest.
> 
> Paolo
Paolo Bonzini Nov. 20, 2013, 11:19 a.m. UTC | #3
On 20/11/2013 12:00, Luigi Rizzo wrote:
> 
>     WITHOUT THE PATCH, booting becomes slow as soon as the timer tick starts
>     and we load dummynet (which also starts a kernel thread every millisecond).
>     You should be able to see how the printing of kernel messages slows down
>     terribly around this point:
> 
> 	...
> 	Timecounters tick every 1.000 msec
> 	  ipfw2 initialized, divert loadable, nat loadable, default to accept, logging disabled
> 	  DUMMYNET 0 with IPv6 initialized (100409)
> 	....
> 
>     and then it takes a long and variable time to reach the login prompt,
>     easily a couple of minutes or more.
>     If you start pkt-gen now, you should see a much lower rate,
>     around 300-500Kpps
> 
> 
> 
> Since the problem seems timer-related, it makes sense that you are
> not seeing much difference in linux which is probably tickless,
> and that the trouble comes out on FreeBSD after the timers are
> initialized.

Saw it now with your FreeBSD guest.

> But again I have no idea if my patch (which simply reverts part of
> the offending commit) makes sense.

No, the patch is wrong. :(

Before Alex's patch, setting a timer did a timer_settime system call.

After, setting the timer exits QEMU's event loop (which uses poll) and
reenters it with a new deadline.  This wouldn't be a problem by itself;
the difference is between one system call plus a signal delivery
(timer_settime + SIGALRM) before the patch, and two system calls
afterwards (a write to a pipe or eventfd, plus calling poll again when
re-entering the event loop).

Unfortunately, the exit/enter causes the main loop to grab the iothread
lock, which in turn kicks the VCPU thread out of execution.

The problem happens with "-smp 2" because FreeBSD uses a "pause"
instruction in its idle loop, and QEMU does not implement it.  Thus
a lot of time is wasted running the second, idle VCPU rather than
the first.

The fix could be to implement the pause instruction.

Paolo

Alex Bligh Nov. 20, 2013, 1:47 p.m. UTC | #4
Paolo,

On 20 Nov 2013, at 11:19, Paolo Bonzini wrote:

> Before Alex's patch, setting a timer did a timer_settime system call.
> 
> After, setting the timer exits QEMU's event loop (which uses poll) and
> reenters it with a new deadline.  This wouldn't be a problem by itself;
> the difference is between one system call plus a signal delivery
> (timer_settime + SIGALRM) before the patch, and two system calls
> afterwards (a write to a pipe or eventfd, plus calling poll again when
> re-entering the event loop).
> 
> Unfortunately, the exit/enter causes the main loop to grab the iothread
> lock, which in turn kicks the VCPU thread out of execution.
> 
> The problem happens with "-smp 2" because FreeBSD uses a "pause"
> instruction in its idle loop, and QEMU does not implement it.  Thus
> a lot of time is wasted running the second, idle VCPU rather than
> the first.
> 
> The fix could be to implement the pause instruction.

I think you are saying this sounds like another underlying bug
(no pause instruction) uncovered by the timer patches rather than
a bug in the timer patches.

Reading
 http://software.intel.com/file/27087
perhaps the answer is for pause to yield the lock.

However, what if an arbitrary guest foolishly /does/ busy-wait
on a spinlock without calling pause? If we have an emulated
multiprocessor guest, perhaps it should be yielding every few thousand
instructions anyway?
Paolo Bonzini Nov. 20, 2013, 1:55 p.m. UTC | #5
On 20/11/2013 14:47, Alex Bligh wrote:
> I think you are saying this sounds like another underlying bug
> (no pause instruction) uncovered by the timer patches rather than
> a bug in the timer patches.
> 
> Reading
>  http://software.intel.com/file/27087
> perhaps the answer is for pause to yield the lock.
> 
> However, what if an arbitrary guest foolishly /does/ busy-wait
> on a spinlock without calling pause?

It would be a problem.  I don't think it has to be fixed, though (and
in any case, not now).

The fix could be to make TCG use "real" SMP instead of round-robin.

Paolo

Patch

--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -380,6 +380,7 @@  static void timerlist_rearm(QEMUTimerList *timer_list)
 {
     /* Interrupt execution to force deadline recalculation.  */
     qemu_clock_warp(timer_list->clock->type);
+if (use_icount) // XXX
     timerlist_notify(timer_list);
 }