From fca51db699b6254f51b1ef4cafc0751179a2dd4c Mon Sep 17 00:00:00 2001
From: Stefan Bader <stefan.bader@canonical.com>
Date: Tue, 5 Feb 2013 15:22:37 +0100
Subject: [PATCH] UBUNTU: SAUCE: xen/pv-spinlock: Never enable interrupts in xen_spin_lock_slow()

The pv-ops spinlock code for Xen will call xen_spin_lock_slow()
if a lock cannot be obtained quickly by normal spinning. The
slow variant puts the VCPU onto a waiters list, then enables
interrupts (which are event channel messages in this case) and
calls a hypervisor function that is supposed to return when the
lock is released.
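
For reference, the slow path currently looks roughly like this
(simplified sketch with the stats accounting omitted; helper
names approximate those in arch/x86/xen/spinlock.c):

	/* register this VCPU as a waiter for the lock */
	prev = spinning_lock(xl);
	do {
		xen_clear_irq_pending(irq);

		flags = arch_local_save_flags();
		if (irq_enable)
			raw_local_irq_enable();	/* window in which other
						   event channel irqs can
						   be delivered */

		xen_poll_irq(irq);	/* hv call, returns when the
					   lock irq becomes pending */

		raw_local_irq_restore(flags);
	} while (!xen_test_irq_pending(irq));
	unspinning_lock(xl, prev);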

Using a test case which runs many threads and causes high
contention on this spinlock code, it can be observed that the
VCPU at the head of the waiters list appears to be doing other
things (no trace of the hv call on its stack) while the waiters
list still has it as its head. Other VCPUs which try to acquire
the lock are then stuck in the hv call forever, and sooner or
later this ends up in circular locking. Testing shows that this
is avoided when interrupts remain disabled for the duration of
the poll_irq hypercall.

Xen PVM is the only affected setup because it is the only one
using the special spinlock API call which passes on the
interrupt flags as they were before taking the lock (those
flags decide whether interrupts get re-enabled in the Xen code).
KVM maps that call to the variant which does not look at the
previous interrupt state and thus never re-enables interrupts.
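
The difference between the two entry points looks roughly like
this (paraphrased sketch; exact signatures may vary between
kernel versions):

	static void xen_spin_lock(struct arch_spinlock *lock)
	{
		/* never re-enable interrupts in the slow path */
		__xen_spin_lock(lock, false);
	}

	static void xen_spin_lock_flags(struct arch_spinlock *lock,
					unsigned long flags)
	{
		/* re-enable interrupts in the slow path only if
		   they were enabled before taking the lock */
		__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
	}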
BugLink: http://bugs.launchpad.net/bugs/1011792
[v2: limited changes and reworded commit message]
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
---
arch/x86/xen/spinlock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -242,7 +242,7 @@ static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
 		flags = arch_local_save_flags();
 		if (irq_enable) {
 			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
+			/* raw_local_irq_enable(); */
 		}
 
 		/*
@@ -256,7 +256,7 @@ static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
 		 */
 		xen_poll_irq(irq);
 
-		raw_local_irq_restore(flags);
+		/* raw_local_irq_restore(flags); */
 
 		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
--
1.7.9.5