Message ID | 1365169560-11012-1-git-send-email-aliguori@us.ibm.com |
---|---|
State | New |
On 04/05/2013 07:46 AM, Anthony Liguori wrote:
> The char-flow refactoring introduced a busy-wait that depended on
> an action from the VCPU thread. However, the VCPU thread could
> never take that action because the busy-wait starved the VCPU thread
> of the BQL because it never dropped the mutex while running select.
>
> Paolo doesn't want to drop this optimization for fear that we will
> stop detecting these busy waits. I'm afraid to keep this optimization
> even with the busy-wait fixed because I think a similar problem can
> occur just with heavy I/O thread load manifesting itself as VCPU pauses.
>
> As a compromise, introduce an artificial timeout after a thousand
> iterations but print a rate limited warning when this happens. This
> let's us still detect when this condition occurs without it being
> a fatal error.
>
> + * print a message to the screen. If we run into this condition, create
> + * an fake timeout in order to give the VCPU threads a chance to run.

s/an fake/a fake/
On 05/04/2013 15:46, Anthony Liguori wrote:
> The char-flow refactoring introduced a busy-wait that depended on
> an action from the VCPU thread. However, the VCPU thread could
> never take that action because the busy-wait starved the VCPU thread
> of the BQL because it never dropped the mutex while running select.
>
> Paolo doesn't want to drop this optimization for fear that we will
> stop detecting these busy waits. I'm afraid to keep this optimization
> even with the busy-wait fixed because I think a similar problem can
> occur just with heavy I/O thread load manifesting itself as VCPU pauses.
>
> As a compromise, introduce an artificial timeout after a thousand
> iterations but print a rate limited warning when this happens. This
> let's us still detect when this condition occurs without it being
> a fatal error.

Good idea, thanks!

Paolo
Eric Blake <eblake@redhat.com> writes:

> On 04/05/2013 07:46 AM, Anthony Liguori wrote:
>> The char-flow refactoring introduced a busy-wait that depended on
>> an action from the VCPU thread. However, the VCPU thread could
>> never take that action because the busy-wait starved the VCPU thread
>> of the BQL because it never dropped the mutex while running select.
>>
>> Paolo doesn't want to drop this optimization for fear that we will
>> stop detecting these busy waits. I'm afraid to keep this optimization
>> even with the busy-wait fixed because I think a similar problem can
>> occur just with heavy I/O thread load manifesting itself as VCPU pauses.
>>
>> As a compromise, introduce an artificial timeout after a thousand
>> iterations but print a rate limited warning when this happens. This
>> let's us still detect when this condition occurs without it being
>> a fatal error.
>>
>> + * print a message to the screen. If we run into this condition, create
>> + * an fake timeout in order to give the VCPU threads a chance to run.
>
> s/an fake/a fake/

Drat, I proofread the commit message hoping to avoid such a mistake but
should have reread the comment :-)

Regards,

Anthony Liguori

> --
> Eric Blake   eblake redhat com   +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
Hi Anthony,

On Fri, Apr 5, 2013 at 11:46 PM, Anthony Liguori <aliguori@us.ibm.com> wrote:
> The char-flow refactoring introduced a busy-wait that depended on
> an action from the VCPU thread. However, the VCPU thread could
> never take that action because the busy-wait starved the VCPU thread
> of the BQL because it never dropped the mutex while running select.
>
> Paolo doesn't want to drop this optimization for fear that we will
> stop detecting these busy waits. I'm afraid to keep this optimization
> even with the busy-wait fixed because I think a similar problem can
> occur just with heavy I/O thread load manifesting itself as VCPU pauses.
>
> As a compromise, introduce an artificial timeout after a thousand
> iterations but print a rate limited warning when this happens. This
> let's us still detect when this condition occurs without it being
> a fatal error.
>
> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
> ---
>  main-loop.c | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
>
> diff --git a/main-loop.c b/main-loop.c
> index eb80ff3..34a3983 100644
> --- a/main-loop.c
> +++ b/main-loop.c
> @@ -188,14 +188,39 @@ static void glib_pollfds_poll(void)
>      }
>  }
>
> +#define MAX_MAIN_LOOP_SPIN (1000)
> +
>  static int os_host_main_loop_wait(uint32_t timeout)
>  {
>      int ret;
> +    static int spin_counter;
>
>      glib_pollfds_fill(&timeout);
>
> +    /* If the I/O thread is very busy or we are incorrectly busy waiting in
> +     * the I/O thread, this can lead to starvation of the BQL such that the
> +     * VCPU threads never run. To make sure we can detect the later case,
> +     * print a message to the screen. If we run into this condition, create
> +     * an fake timeout in order to give the VCPU threads a chance to run.
> +     */
> +    if (spin_counter > MAX_MAIN_LOOP_SPIN) {
> +        static bool notified;
> +
> +        if (!notified) {
> +            fprintf(stderr,
> +                    "main-loop: WARNING: I/O thread spun for %d iterations\n",
> +                    MAX_MAIN_LOOP_SPIN);
> +            notified = true;
> +        }
> +
> +        timeout = 1;
> +    }
> +
>      if (timeout > 0) {
> +        spin_counter = 0;

This is too slow. Resetting the spin counter on every unlock means you
suffer the 1000 iteration wait every time until the starvation is over.
I'm getting ten second delays when I mouse wheel paste 40+ chars into
the serial. I'll put my fix on list.

Regards,
Peter

>          qemu_mutex_unlock_iothread();
> +    } else {
> +        spin_counter++;
>      }
>
>      ret = g_poll((GPollFD *)gpollfds->data, gpollfds->len, timeout);
> --
> 1.8.0
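Concretely, because the applied patch resets `spin_counter` on every pass that sees a non-zero timeout, each fresh burst of busy-waiting has to spin the full 1000 iterations again before the 1 ms back-off kicks in. The fragment below is a minimal sketch of one way around that, shown purely as an illustration and not as the fix Peter later posted; the `forced_timeout` flag is a hypothetical name used only for this sketch.

```c
/* Sketch only: same structure as the applied patch, with one change.
 * The hypothetical forced_timeout flag marks the artificial 1 ms
 * back-off so that it does not reset the spin counter.
 */
static int os_host_main_loop_wait(uint32_t timeout)
{
    int ret;
    static int spin_counter;
    bool forced_timeout = false;

    glib_pollfds_fill(&timeout);

    if (spin_counter > MAX_MAIN_LOOP_SPIN) {
        /* ... rate-limited warning exactly as in the applied patch ... */
        timeout = 1;
        forced_timeout = true;
    }

    if (timeout > 0) {
        if (!forced_timeout) {
            spin_counter = 0;   /* reset only on a genuine timeout */
        }
        qemu_mutex_unlock_iothread();
    } else {
        spin_counter++;
    }

    ret = g_poll((GPollFD *)gpollfds->data, gpollfds->len, timeout);

    /* ... remainder of the function unchanged ... */
    return ret;
}
```

With a change along these lines the 1 ms back-off fires on every iteration for as long as the starvation lasts, instead of re-paying the full 1000-iteration spin each time a real timeout briefly resets the counter.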
Applied. Thanks.

Regards,

Anthony Liguori
On 08.04.2013 23:55, Anthony Liguori wrote:
> Applied. Thanks.
I am seeing "main-loop: WARNING: I/O thread spun for 1000 iterations"
for both check-qtest-i386 and check-qtest-x86_64 now, is that expected?
PMM just recently cleaned up the noisy arm qtest target... :(
Andreas
Andreas Färber <afaerber@suse.de> writes:

> On 08.04.2013 23:55, Anthony Liguori wrote:
>> Applied. Thanks.
>
> I am seeing "main-loop: WARNING: I/O thread spun for 1000 iterations"
> for both check-qtest-i386 and check-qtest-x86_64 now, is that expected?
>
> PMM just recently cleaned up the noisy arm qtest target... :(

No, it's not expected. Sounds like there's a flow control bug in qtest.

Regards,

Anthony Liguori

>
> Andreas
>
> --
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
```diff
diff --git a/main-loop.c b/main-loop.c
index eb80ff3..34a3983 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -188,14 +188,39 @@ static void glib_pollfds_poll(void)
     }
 }
 
+#define MAX_MAIN_LOOP_SPIN (1000)
+
 static int os_host_main_loop_wait(uint32_t timeout)
 {
     int ret;
+    static int spin_counter;
 
     glib_pollfds_fill(&timeout);
 
+    /* If the I/O thread is very busy or we are incorrectly busy waiting in
+     * the I/O thread, this can lead to starvation of the BQL such that the
+     * VCPU threads never run. To make sure we can detect the later case,
+     * print a message to the screen. If we run into this condition, create
+     * an fake timeout in order to give the VCPU threads a chance to run.
+     */
+    if (spin_counter > MAX_MAIN_LOOP_SPIN) {
+        static bool notified;
+
+        if (!notified) {
+            fprintf(stderr,
+                    "main-loop: WARNING: I/O thread spun for %d iterations\n",
+                    MAX_MAIN_LOOP_SPIN);
+            notified = true;
+        }
+
+        timeout = 1;
+    }
+
     if (timeout > 0) {
+        spin_counter = 0;
         qemu_mutex_unlock_iothread();
+    } else {
+        spin_counter++;
     }
 
     ret = g_poll((GPollFD *)gpollfds->data, gpollfds->len, timeout);
```
The char-flow refactoring introduced a busy-wait that depended on
an action from the VCPU thread. However, the VCPU thread could
never take that action because the busy-wait starved the VCPU thread
of the BQL because it never dropped the mutex while running select.

Paolo doesn't want to drop this optimization for fear that we will
stop detecting these busy waits. I'm afraid to keep this optimization
even with the busy-wait fixed because I think a similar problem can
occur just with heavy I/O thread load manifesting itself as VCPU pauses.

As a compromise, introduce an artificial timeout after a thousand
iterations but print a rate limited warning when this happens. This
let's us still detect when this condition occurs without it being
a fatal error.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
---
 main-loop.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)