
[V9,1/3] irq: Allow to pass the IRQF_TIMER flag with percpu irq request

Message ID: 16042494-2e67-e1a5-b9f6-af57e349d8a7@arm.com
State: New

Commit Message

Marc Zyngier April 25, 2017, 7:38 a.m. UTC
On 24/04/17 20:59, Daniel Lezcano wrote:
> On Mon, Apr 24, 2017 at 08:14:54PM +0100, Marc Zyngier wrote:
>> On 24/04/17 19:59, Daniel Lezcano wrote:
>>> On Mon, Apr 24, 2017 at 07:46:43PM +0100, Marc Zyngier wrote:
>>>> On 24/04/17 15:01, Daniel Lezcano wrote:
>>>>> In the next changes, we track when the interrupts occur in order to
>>>>> statistically compute when the next interrupt is supposed to happen.
>>>>>
>>>>> Among all the interrupts, it does not make sense to store the timer interrupt
>>>>> occurrences and try to predict the next one, as we already know the
>>>>> expiration time.
>>>>>
>>>>> request_irq() has an irq flags parameter, and the timer drivers use it to
>>>>> pass the IRQF_TIMER flag, letting us know the interrupt is coming from a timer.
>>>>> Based on this flag, we can discard these interrupts when tracking them.
>>>>>
>>>>> But the request_percpu_irq() API does not allow passing a flag, hence
>>>>> specifying whether the interrupt is a timer.
>>>>>
>>>>> Add a function request_percpu_irq_flags() where we can specify the flags. The
>>>>> request_percpu_irq() function is changed to be a wrapper around
>>>>> request_percpu_irq_flags(), passing a zero flags parameter.
>>>>>
>>>>> Change the timers using request_percpu_irq() to use request_percpu_irq_flags()
>>>>> instead, with the IRQF_TIMER flag set.
>>>>>
>>>>> For now, in order to prevent misuse of this parameter, only the IRQF_TIMER
>>>>> flag (or zero) is a valid parameter to be passed to the
>>>>> request_percpu_irq_flags() function.
>>>>
>>>> [...]
>>>>
>>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>>>> index 35d7100..602e0a8 100644
>>>>> --- a/virt/kvm/arm/arch_timer.c
>>>>> +++ b/virt/kvm/arm/arch_timer.c
>>>>> @@ -523,8 +523,9 @@ int kvm_timer_hyp_init(void)
>>>>>  		host_vtimer_irq_flags = IRQF_TRIGGER_LOW;
>>>>>  	}
>>>>>  
>>>>> -	err = request_percpu_irq(host_vtimer_irq, kvm_arch_timer_handler,
>>>>> -				 "kvm guest timer", kvm_get_running_vcpus());
>>>>> +	err = request_percpu_irq_flags(host_vtimer_irq, kvm_arch_timer_handler,
>>>>> +				       IRQF_TIMER, "kvm guest timer",
>>>>> +				       kvm_get_running_vcpus());
>>>>>  	if (err) {
>>>>>  		kvm_err("kvm_arch_timer: can't request interrupt %d (%d)\n",
>>>>>  			host_vtimer_irq, err);
>>>>>
>>>>
>>>> How is that useful? This timer is controlled by the guest OS, and not
>>>> the host kernel. Can you explain how you intend to make use of that
>>>> information in this case?
>>>
>>> Isn't it a source of interrupts on the host kernel?
>>
>> Only to cause an exit of the VM, and not under the control of the host.
>> This isn't triggering any timer related action on the host code either.
>>
>> Your patch series seems to assume some kind of predictability of the
>> timer interrupt, which can make sense on the host. Here, this interrupt
>> is shared among *all* guests running on this system.
>>
>> Maybe you could explain why you think this interrupt is relevant to what
>> you're trying to achieve?
> 
> If this interrupt does not happen on the host, we don't care.

All interrupts happen on the host. There is no such thing as a HW
interrupt being directly delivered to a guest (at least so far). The
timer is under the control of the guest, which uses it as it sees fit.
When the HW timer expires, the interrupt fires on the host, which
re-injects the interrupt into the guest.

> The IRQF_TIMER flag is used by the spurious irq handler in the try_one_irq()
> function. However, the per-cpu timer interrupt will be discarded earlier in
> that function because it is per-cpu.

Right. That's not because this is a timer, but because it is per-cpu. 
So why do we need this IRQF_TIMER flag, instead of fixing try_one_irq()?

> IMO, for consistency reasons, adding IRQF_TIMER makes sense. Other than
> that, as the interrupt is not happening on the host, this flag won't be used.
> 
> Do you want to drop this change?

No, I'd like to understand the above. Why isn't the following patch 
doing the right thing?


Thanks,

	M.

Comments

Daniel Lezcano April 25, 2017, 8:34 a.m. UTC | #1
On Tue, Apr 25, 2017 at 08:38:56AM +0100, Marc Zyngier wrote:
> On 24/04/17 20:59, Daniel Lezcano wrote:
> > On Mon, Apr 24, 2017 at 08:14:54PM +0100, Marc Zyngier wrote:
> >> On 24/04/17 19:59, Daniel Lezcano wrote:
> >>> On Mon, Apr 24, 2017 at 07:46:43PM +0100, Marc Zyngier wrote:
> >>>> On 24/04/17 15:01, Daniel Lezcano wrote:
> >>>>> In the next changes, we track when the interrupts occur in order to
> >>>>> statistically compute when the next interrupt is supposed to happen.
> >>>>>
> >>>>> Among all the interrupts, it does not make sense to store the timer interrupt
> >>>>> occurrences and try to predict the next one, as we already know the
> >>>>> expiration time.
> >>>>>
> >>>>> request_irq() has an irq flags parameter, and the timer drivers use it to
> >>>>> pass the IRQF_TIMER flag, letting us know the interrupt is coming from a timer.
> >>>>> Based on this flag, we can discard these interrupts when tracking them.
> >>>>>
> >>>>> But the request_percpu_irq() API does not allow passing a flag, hence
> >>>>> specifying whether the interrupt is a timer.
> >>>>>
> >>>>> Add a function request_percpu_irq_flags() where we can specify the flags. The
> >>>>> request_percpu_irq() function is changed to be a wrapper around
> >>>>> request_percpu_irq_flags(), passing a zero flags parameter.
> >>>>>
> >>>>> Change the timers using request_percpu_irq() to use request_percpu_irq_flags()
> >>>>> instead, with the IRQF_TIMER flag set.
> >>>>>
> >>>>> For now, in order to prevent misuse of this parameter, only the IRQF_TIMER
> >>>>> flag (or zero) is a valid parameter to be passed to the
> >>>>> request_percpu_irq_flags() function.
> >>>>
> >>>> [...]
> >>>>
> >>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> >>>>> index 35d7100..602e0a8 100644
> >>>>> --- a/virt/kvm/arm/arch_timer.c
> >>>>> +++ b/virt/kvm/arm/arch_timer.c
> >>>>> @@ -523,8 +523,9 @@ int kvm_timer_hyp_init(void)
> >>>>>  		host_vtimer_irq_flags = IRQF_TRIGGER_LOW;
> >>>>>  	}
> >>>>>  
> >>>>> -	err = request_percpu_irq(host_vtimer_irq, kvm_arch_timer_handler,
> >>>>> -				 "kvm guest timer", kvm_get_running_vcpus());
> >>>>> +	err = request_percpu_irq_flags(host_vtimer_irq, kvm_arch_timer_handler,
> >>>>> +				       IRQF_TIMER, "kvm guest timer",
> >>>>> +				       kvm_get_running_vcpus());
> >>>>>  	if (err) {
> >>>>>  		kvm_err("kvm_arch_timer: can't request interrupt %d (%d)\n",
> >>>>>  			host_vtimer_irq, err);
> >>>>>
> >>>>
> >>>> How is that useful? This timer is controlled by the guest OS, and not
> >>>> the host kernel. Can you explain how you intend to make use of that
> >>>> information in this case?
> >>>
> >>> Isn't it a source of interrupts on the host kernel?
> >>
> >> Only to cause an exit of the VM, and not under the control of the host.
> >> This isn't triggering any timer related action on the host code either.
> >>
> >> Your patch series seems to assume some kind of predictability of the
> >> timer interrupt, which can make sense on the host. Here, this interrupt
> >> is shared among *all* guests running on this system.
> >>
> >> Maybe you could explain why you think this interrupt is relevant to what
> >> you're trying to achieve?
> > 
> > If this interrupt does not happen on the host, we don't care.
> 
> All interrupts happen on the host. There is no such thing as a HW
> interrupt being directly delivered to a guest (at least so far). The
> timer is under the control of the guest, which uses it as it sees fit.
> When the HW timer expires, the interrupt fires on the host, which
> re-injects the interrupt into the guest.

Ah, thanks for the clarification. Interesting.

How can the host know which guest to re-inject the interrupt into?

> > The IRQF_TIMER flag is used by the spurious irq handler in the try_one_irq()
> > function. However, the per-cpu timer interrupt will be discarded earlier in
> > that function because it is per-cpu.
> 
> Right. That's not because this is a timer, but because it is per-cpu. 
> So why do we need this IRQF_TIMER flag, instead of fixing try_one_irq()?

When a timer is not per-cpu (e.g. requested via request_irq()), we need this flag, no?

> > IMO, for consistency reasons, adding IRQF_TIMER makes sense. Other than
> > that, as the interrupt is not happening on the host, this flag won't be used.
> > 
> > Do you want to drop this change?
> 
> No, I'd like to understand the above. Why isn't the following patch 
> doing the right thing?

Actually, the explanation is in the next patch of the series (2/3):

[ ... ]

+static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
+{
+	/*
+	 * We don't need the measurement because the idle code already
+	 * knows the next expiry event.
+	 */
+	if (act->flags & __IRQF_TIMER)
+		return;
+
+	desc->istate |= IRQS_TIMINGS;
+}

[ ... ]

+/*
+ * The function record_irq_time is only called in one place in the
+ * interrupts handler. We want this function always inline so the code
+ * inside is embedded in the function and the static key branching
+ * code can act at the higher level. Without the explicit
+ * __always_inline we can end up with a function call and a small
+ * overhead in the hotpath for nothing.
+ */
+static __always_inline void record_irq_time(struct irq_desc *desc)
+{
+	if (!static_branch_likely(&irq_timing_enabled))
+		return;
+
+	if (desc->istate & IRQS_TIMINGS) {
+		struct irq_timings *timings = this_cpu_ptr(&irq_timings);
+
+		timings->values[timings->count & IRQ_TIMINGS_MASK] =
+			irq_timing_encode(local_clock(),
+					  irq_desc_get_irq(desc));
+
+		timings->count++;
+	}
+}

[ ... ]
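
As an aside, the encoded u64 above can be unpacked again when the prediction
code consumes the buffer. A decode counterpart could look like this (a sketch:
the exact bit split is an assumption, with the irq number packed into the low
bits of the u64):

/*
 * Sketch of a decode counterpart to irq_timing_encode() above.
 * Assumed layout: irq number in the low 16 bits, truncated
 * local_clock() timestamp in the remaining high bits.
 */
static u64 irq_timing_decode(u64 value, unsigned int *irq)
{
	*irq = value & U16_MAX;		/* low bits: irq number */
	return value >> 16;		/* high bits: timestamp */
}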

The purpose is to predict the next interrupt events on the system which are
sources of wake-ups. For now, this patchset focuses on interrupts, discarding
the timer interrupts.

The following article gives more details: https://lwn.net/Articles/673641/

When the interrupt is set up, we tag it unless it is a timer. So with this
patch there is another use of IRQF_TIMER, where we ignore interrupts
coming from a timer.

As the timer interrupt is delivered to the host, we should not measure it
(it is a timer), hence we set this flag.

The needed information is: "what is the earliest VM timer?". If this
information is already available then there is nothing more to do, otherwise we
should add it in the future.

> diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
> index 061ba7eed4ed..a4a81c6c7602 100644
> --- a/kernel/irq/spurious.c
> +++ b/kernel/irq/spurious.c
> @@ -72,6 +72,7 @@ static int try_one_irq(struct irq_desc *desc, bool force)
>  	 * marked polled are excluded from polling.
>  	 */
>  	if (irq_settings_is_per_cpu(desc) ||
> +	    irq_settings_is_per_cpu_devid(desc) ||
>  	    irq_settings_is_nested_thread(desc) ||
>  	    irq_settings_is_polled(desc))
>  		goto out;
> 
> Thanks,
> 
> 	M.
> -- 
> Jazz is not dead. It just smells funny...
Marc Zyngier April 25, 2017, 9:10 a.m. UTC | #2
On 25/04/17 09:34, Daniel Lezcano wrote:
> On Tue, Apr 25, 2017 at 08:38:56AM +0100, Marc Zyngier wrote:
>> On 24/04/17 20:59, Daniel Lezcano wrote:
>>> On Mon, Apr 24, 2017 at 08:14:54PM +0100, Marc Zyngier wrote:
>>>> On 24/04/17 19:59, Daniel Lezcano wrote:
>>>>> On Mon, Apr 24, 2017 at 07:46:43PM +0100, Marc Zyngier wrote:
>>>>>> On 24/04/17 15:01, Daniel Lezcano wrote:
>>>>>>> In the next changes, we track when the interrupts occur in order to
>>>>>>> statistically compute when the next interrupt is supposed to happen.
>>>>>>>
>>>>>>> Among all the interrupts, it does not make sense to store the timer interrupt
>>>>>>> occurrences and try to predict the next one, as we already know the
>>>>>>> expiration time.
>>>>>>>
>>>>>>> request_irq() has an irq flags parameter, and the timer drivers use it to
>>>>>>> pass the IRQF_TIMER flag, letting us know the interrupt is coming from a timer.
>>>>>>> Based on this flag, we can discard these interrupts when tracking them.
>>>>>>>
>>>>>>> But the request_percpu_irq() API does not allow passing a flag, hence
>>>>>>> specifying whether the interrupt is a timer.
>>>>>>>
>>>>>>> Add a function request_percpu_irq_flags() where we can specify the flags. The
>>>>>>> request_percpu_irq() function is changed to be a wrapper around
>>>>>>> request_percpu_irq_flags(), passing a zero flags parameter.
>>>>>>>
>>>>>>> Change the timers using request_percpu_irq() to use request_percpu_irq_flags()
>>>>>>> instead, with the IRQF_TIMER flag set.
>>>>>>>
>>>>>>> For now, in order to prevent misuse of this parameter, only the IRQF_TIMER
>>>>>>> flag (or zero) is a valid parameter to be passed to the
>>>>>>> request_percpu_irq_flags() function.
>>>>>>
>>>>>> [...]
>>>>>>
>>>>>>> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>>>>>>> index 35d7100..602e0a8 100644
>>>>>>> --- a/virt/kvm/arm/arch_timer.c
>>>>>>> +++ b/virt/kvm/arm/arch_timer.c
>>>>>>> @@ -523,8 +523,9 @@ int kvm_timer_hyp_init(void)
>>>>>>>  		host_vtimer_irq_flags = IRQF_TRIGGER_LOW;
>>>>>>>  	}
>>>>>>>  
>>>>>>> -	err = request_percpu_irq(host_vtimer_irq, kvm_arch_timer_handler,
>>>>>>> -				 "kvm guest timer", kvm_get_running_vcpus());
>>>>>>> +	err = request_percpu_irq_flags(host_vtimer_irq, kvm_arch_timer_handler,
>>>>>>> +				       IRQF_TIMER, "kvm guest timer",
>>>>>>> +				       kvm_get_running_vcpus());
>>>>>>>  	if (err) {
>>>>>>>  		kvm_err("kvm_arch_timer: can't request interrupt %d (%d)\n",
>>>>>>>  			host_vtimer_irq, err);
>>>>>>>
>>>>>>
>>>>>> How is that useful? This timer is controlled by the guest OS, and not
>>>>>> the host kernel. Can you explain how you intend to make use of that
>>>>>> information in this case?
>>>>>
>>>>> Isn't it a source of interrupts on the host kernel?
>>>>
>>>> Only to cause an exit of the VM, and not under the control of the host.
>>>> This isn't triggering any timer related action on the host code either.
>>>>
>>>> Your patch series seems to assume some kind of predictability of the
>>>> timer interrupt, which can make sense on the host. Here, this interrupt
>>>> is shared among *all* guests running on this system.
>>>>
>>>> Maybe you could explain why you think this interrupt is relevant to what
>>>> you're trying to achieve?
>>>
>>> If this interrupt does not happen on the host, we don't care.
>>
>> All interrupts happen on the host. There is no such thing as a HW
>> interrupt being directly delivered to a guest (at least so far). The
>> timer is under the control of the guest, which uses it as it sees fit.
>> When the HW timer expires, the interrupt fires on the host, which
>> re-injects the interrupt into the guest.
> 
> Ah, thanks for the clarification. Interesting.
> 
> How can the host know which guest to re-inject the interrupt into?

The timer can only fire when the vcpu is running. If it is not running,
a software timer is queued, with a pointer to the vcpu struct.

>>> The IRQF_TIMER flag is used by the spurious irq handler in the try_one_irq()
>>> function. However, the per-cpu timer interrupt will be discarded earlier in
>>> that function because it is per-cpu.
>>
>> Right. That's not because this is a timer, but because it is per-cpu. 
>> So why do we need this IRQF_TIMER flag, instead of fixing try_one_irq()?
> 
> When a timer is not per-cpu (e.g. requested via request_irq()), we need this flag, no?

Sure, but in this series, they all seem to be per-cpu.

>>> IMO, for consistency reasons, adding IRQF_TIMER makes sense. Other than
>>> that, as the interrupt is not happening on the host, this flag won't be used.
>>>
>>> Do you want to drop this change?
>>
>> No, I'd like to understand the above. Why isn't the following patch 
>> doing the right thing?
> 
> Actually, the explanation is in the next patch of the series (2/3):
> 
> [ ... ]
> 
> +static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
> +{
> +	/*
> +	 * We don't need the measurement because the idle code already
> +	 * knows the next expiry event.
> +	 */
> +	if (act->flags & __IRQF_TIMER)
> +		return;

And that's where this is really wrong for the KVM guest timer. As I
said, this timer is under complete control of the guest, and the rest of
the system doesn't know about it. KVM itself will only find out when the
vcpu does a VM exit for one reason or another, and will just save/restore
the state in order to be able to give the timer to another guest.

The idle code is very much *not* aware of anything concerning that guest
timer.

> +
> +	desc->istate |= IRQS_TIMINGS;
> +}
> 
> [ ... ]
> 
> +/*
> + * The function record_irq_time is only called in one place in the
> + * interrupts handler. We want this function always inline so the code
> + * inside is embedded in the function and the static key branching
> + * code can act at the higher level. Without the explicit
> + * __always_inline we can end up with a function call and a small
> + * overhead in the hotpath for nothing.
> + */
> +static __always_inline void record_irq_time(struct irq_desc *desc)
> +{
> +	if (!static_branch_likely(&irq_timing_enabled))
> +		return;
> +
> +	if (desc->istate & IRQS_TIMINGS) {
> +		struct irq_timings *timings = this_cpu_ptr(&irq_timings);
> +
> +		timings->values[timings->count & IRQ_TIMINGS_MASK] =
> +			irq_timing_encode(local_clock(),
> +					  irq_desc_get_irq(desc));
> +
> +		timings->count++;
> +	}
> +}
> 
> [ ... ]
> 
> The purpose is to predict the next interrupt events on the system which are
> sources of wake-ups. For now, this patchset focuses on interrupts, discarding
> the timer interrupts.
> 
> The following article gives more details: https://lwn.net/Articles/673641/
> 
> When the interrupt is set up, we tag it unless it is a timer. So with this
> patch there is another use of IRQF_TIMER, where we ignore interrupts
> coming from a timer.
> 
> As the timer interrupt is delivered to the host, we should not measure it
> (it is a timer), hence we set this flag.
> 
> The needed information is: "what is the earliest VM timer?". If this
> information is already available then there is nothing more to do, otherwise we
> should add it in the future.

This information is not readily available. You can only find it when it
is too late (timer has already fired) or when it is not relevant anymore
(guest is sleeping and we've queued a SW timer for it).

Thanks,

	M.
Daniel Lezcano April 25, 2017, 9:49 a.m. UTC | #3
On Tue, Apr 25, 2017 at 10:10:12AM +0100, Marc Zyngier wrote:

[ ... ]

> >>>> Maybe you could explain why you think this interrupt is relevant to what
> >>>> you're trying to achieve?
> >>>
> >>> If this interrupt does not happen on the host, we don't care.
> >>
> >> All interrupts happen on the host. There is no such thing as a HW
> >> interrupt being directly delivered to a guest (at least so far). The
> >> timer is under the control of the guest, which uses it as it sees fit.
> >> When the HW timer expires, the interrupt fires on the host, which
> >> re-injects the interrupt into the guest.
> > 
> > Ah, thanks for the clarification. Interesting.
> > 
> > How can the host know which guest to re-inject the interrupt into?
> 
> The timer can only fire when the vcpu is running. If it is not running,
> a software timer is queued, with a pointer to the vcpu struct.

I see, thanks.

> >>> The IRQF_TIMER flag is used by the spurious irq handler in the try_one_irq()
> >>> function. However, the per-cpu timer interrupt will be discarded earlier in
> >>> that function because it is per-cpu.
> >>
> >> Right. That's not because this is a timer, but because it is per-cpu. 
> >> So why do we need this IRQF_TIMER flag, instead of fixing try_one_irq()?
> > 
> > When a timer is not per-cpu (e.g. requested via request_irq()), we need this flag, no?
> 
> Sure, but in this series, they all seem to be per-cpu.

I think I was unclear. We need to tag interrupts with IRQS_TIMINGS to record
their occurrences, while discarding the timer interrupts. That is done by
checking against IRQF_TIMER when setting up an interrupt.

request_irq() has a flags parameter which has IRQF_TIMER set for the timers.
request_percpu_irq() has no flags parameter, so it is not possible to discard
these interrupts, as IRQS_TIMINGS will be set for them.
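
To make that concrete, here is roughly the shape of the proposed interface (a
sketch inferred from the commit message and the kvm/arch_timer.c hunk above,
not the exact V9 patch):

int request_percpu_irq_flags(unsigned int irq, irq_handler_t handler,
			     unsigned long flags, const char *devname,
			     void __percpu *percpu_dev_id);

/* The existing API becomes a wrapper passing zero flags ... */
static inline int request_percpu_irq(unsigned int irq, irq_handler_t handler,
				     const char *devname,
				     void __percpu *percpu_dev_id)
{
	return request_percpu_irq_flags(irq, handler, 0, devname,
					percpu_dev_id);
}

/* ... while a timer driver passes IRQF_TIMER explicitly, as in the
 * kvm_timer_hyp_init() hunk quoted earlier. */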

I don't understand how this is related to the try_one_irq() fix you are
proposing. Am I missing something?

Regarding your description below, the host has no control at all over the
virtual timer and is not able to know the next expiration time, so I don't see
the point of adding the IRQF_TIMER flag to the virtual timer.

I will resend a new version without this change to the virtual timer.

> >>> IMO, for consistency reasons, adding IRQF_TIMER makes sense. Other than
> >>> that, as the interrupt is not happening on the host, this flag won't be used.
> >>>
> >>> Do you want to drop this change?
> >>
> >> No, I'd like to understand the above. Why isn't the following patch 
> >> doing the right thing?
> > 
> > Actually, the explanation is in the next patch of the series (2/3):
> > 
> > [ ... ]
> > 
> > +static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
> > +{
> > +	/*
> > +	 * We don't need the measurement because the idle code already
> > +	 * knows the next expiry event.
> > +	 */
> > +	if (act->flags & __IRQF_TIMER)
> > +		return;
> 
> And that's where this is really wrong for the KVM guest timer. As I
> said, this timer is under complete control of the guest, and the rest of
> the system doesn't know about it. KVM itself will only find out when the
> vcpu does a VM exit for one reason or another, and will just save/restore
> the state in order to be able to give the timer to another guest.
> 
> The idle code is very much *not* aware of anything concerning that guest
> timer.

Just for my own curiosity: if there are two VMs (VM1 and VM2), VM1 sets timer1
at <time> and exits, then VM2 runs and sets timer2 at <time+delta>.

Timer1 for VM1 is supposed to expire while VM2 is running. IIUC the virtual
timer is under the control of VM2 and will expire at <time+delta>.

Does the host wake up with the SW timer and switch to VM1, which in turn
restores the timer and jumps into the virtual timer irq handler?
 
> > +
> > +	desc->istate |= IRQS_TIMINGS;
> > +}
> > 
> > [ ... ]
> > 
> > +/*
> > + * The function record_irq_time is only called in one place in the
> > + * interrupts handler. We want this function always inline so the code
> > + * inside is embedded in the function and the static key branching
> > + * code can act at the higher level. Without the explicit
> > + * __always_inline we can end up with a function call and a small
> > + * overhead in the hotpath for nothing.
> > + */
> > +static __always_inline void record_irq_time(struct irq_desc *desc)
> > +{
> > +	if (!static_branch_likely(&irq_timing_enabled))
> > +		return;
> > +
> > +	if (desc->istate & IRQS_TIMINGS) {
> > +		struct irq_timings *timings = this_cpu_ptr(&irq_timings);
> > +
> > +		timings->values[timings->count & IRQ_TIMINGS_MASK] =
> > +			irq_timing_encode(local_clock(),
> > +					  irq_desc_get_irq(desc));
> > +
> > +		timings->count++;
> > +	}
> > +}
> > 
> > [ ... ]
> > 
> > The purpose is to predict the next interrupt events on the system which are
> > sources of wake-ups. For now, this patchset focuses on interrupts, discarding
> > the timer interrupts.
> > 
> > The following article gives more details: https://lwn.net/Articles/673641/
> > 
> > When the interrupt is set up, we tag it unless it is a timer. So with this
> > patch there is another use of IRQF_TIMER, where we ignore interrupts
> > coming from a timer.
> > 
> > As the timer interrupt is delivered to the host, we should not measure it
> > (it is a timer), hence we set this flag.
> > 
> > The needed information is: "what is the earliest VM timer?". If this
> > information is already available then there is nothing more to do, otherwise we
> > should add it in the future.
> 
> This information is not readily available. You can only find it when it
> is too late (timer has already fired) or when it is not relevant anymore
> (guest is sleeping and we've queued a SW timer for it).
> 
> Thanks,
> 
> 	M.
> -- 
> Jazz is not dead. It just smells funny...
Marc Zyngier April 25, 2017, 10:21 a.m. UTC | #4
On 25/04/17 10:49, Daniel Lezcano wrote:
> On Tue, Apr 25, 2017 at 10:10:12AM +0100, Marc Zyngier wrote:

[...]

>>> +static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
>>> +{
>>> +	/*
>>> +	 * We don't need the measurement because the idle code already
>>> +	 * knows the next expiry event.
>>> +	 */
>>> +	if (act->flags & __IRQF_TIMER)
>>> +		return;
>>
>> And that's where this is really wrong for the KVM guest timer. As I
>> said, this timer is under complete control of the guest, and the rest of
>> the system doesn't know about it. KVM itself will only find out when the
>> vcpu does a VM exit for one reason or another, and will just save/restore
>> the state in order to be able to give the timer to another guest.
>>
>> The idle code is very much *not* aware of anything concerning that guest
>> timer.
> 
> Just for my own curiosity: if there are two VMs (VM1 and VM2), VM1 sets timer1
> at <time> and exits, then VM2 runs and sets timer2 at <time+delta>.
>
> Timer1 for VM1 is supposed to expire while VM2 is running. IIUC the virtual
> timer is under the control of VM2 and will expire at <time+delta>.
>
> Does the host wake up with the SW timer and switch to VM1, which in turn
> restores the timer and jumps into the virtual timer irq handler?

Indeed. The SW timer causes VM1 to wake up, either on the same CPU
(preempting VM2) or on another. The timer is then restored with the
pending virtual interrupt injected, and the guest does what it has to
with it.

Thanks,

	M.
Christoffer Dall April 25, 2017, 10:22 a.m. UTC | #5
On Tue, Apr 25, 2017 at 11:49:27AM +0200, Daniel Lezcano wrote:

[...]

> > 
> > The idle code is very much *not* aware of anything concerning that guest
> > timer.
> 
> Just for my own curiosity: if there are two VMs (VM1 and VM2), VM1 sets timer1
> at <time> and exits, then VM2 runs and sets timer2 at <time+delta>.
>
> Timer1 for VM1 is supposed to expire while VM2 is running. IIUC the virtual
> timer is under the control of VM2 and will expire at <time+delta>.
>
> Does the host wake up with the SW timer and switch to VM1, which in turn
> restores the timer and jumps into the virtual timer irq handler?
>  
The thing that may be missing here is that a VCPU thread (a collection of
which makes up a VM) is just a thread from the point of view of Linux, and
whether or not a guest schedules a timer should not affect the scheduler's
decision to run a given thread, if the thread is runnable.

Whenever we run a VCPU thread, we look at its timer state (in software),
calculate whether the guest should see a timer interrupt, and inject one if so
(the hardware arch timer is not involved in this process at all).

We use timers in exactly two scenarios:

 1. The hardware arch timers are used to force an exit to the host when
    the guest has programmed the timer, so we can do the in-software
    calculation I mentioned above and inject a virtual software-generated
    interrupt when the guest expects to see one.

 2. The guest goes to sleep (WFI) but has programmed a timer to be woken
    up at some point.  KVM handles a WFI by blocking the VCPU thread,
    which basically means making the thread interruptible and putting it
    on a waitqueue.  In this case we schedule a software timer to make
    the thread runnable again when the software timer fires (and the
    scheduler runs that thread when it wants to after that).
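
To sketch scenario 2 in code (illustrative names only, not KVM's actual
structures or helpers):

#include <linux/hrtimer.h>

/* Illustrative vcpu state -- not KVM's real struct layout. */
struct vcpu_sketch {
	struct hrtimer wakeup_timer;	/* host SW timer backing a blocked vcpu */
	u64 guest_expiry_ns;		/* guest timer deadline, in host ns */
};

/*
 * On WFI with a programmed guest timer: queue a host software timer so
 * the blocked vcpu thread becomes runnable again at the guest's expiry
 * time, then block the thread as described above.
 */
static void vcpu_block_sketch(struct vcpu_sketch *vcpu, u64 now_ns)
{
	hrtimer_start(&vcpu->wakeup_timer,
		      ns_to_ktime(vcpu->guest_expiry_ns - now_ns),
		      HRTIMER_MODE_REL);
	/* ... make the thread interruptible and put it on a waitqueue. */
}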

If you have a VCPU thread from VM1 blocked, and you run a VCPU thread
from VM2, then the VM2 VCPU thread will program the hardware arch timer
with its own context while running. This has nothing to do with the VM1
VCPU thread at this point, because that thread relies on the host Linux
timekeeping infrastructure to become runnable some time in the future,
and running a guest naturally doesn't mess with the host's timekeeping.

Hope this helps,
-Christoffer
Daniel Lezcano April 25, 2017, 12:51 p.m. UTC | #6
On Tue, Apr 25, 2017 at 11:21:21AM +0100, Marc Zyngier wrote:
> On 25/04/17 10:49, Daniel Lezcano wrote:
> > On Tue, Apr 25, 2017 at 10:10:12AM +0100, Marc Zyngier wrote:
> 
> [...]
> 
> >>> +static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
> >>> +{
> >>> +	/*
> >>> +	 * We don't need the measurement because the idle code already
> >>> +	 * knows the next expiry event.
> >>> +	 */
> >>> +	if (act->flags & __IRQF_TIMER)
> >>> +		return;
> >>
> >> And that's where this is really wrong for the KVM guest timer. As I
> >> said, this timer is under complete control of the guest, and the rest of
> >> the system doesn't know about it. KVM itself will only find out when the
> >> vcpu does a VM exit for one reason or another, and will just save/restore
> >> the state in order to be able to give the timer to another guest.
> >>
> >> The idle code is very much *not* aware of anything concerning that guest
> >> timer.
> > 
> > Just for my own curiosity: if there are two VMs (VM1 and VM2), VM1 sets timer1
> > at <time> and exits, then VM2 runs and sets timer2 at <time+delta>.
> >
> > Timer1 for VM1 is supposed to expire while VM2 is running. IIUC the virtual
> > timer is under the control of VM2 and will expire at <time+delta>.
> >
> > Does the host wake up with the SW timer and switch to VM1, which in turn
> > restores the timer and jumps into the virtual timer irq handler?
> 
> Indeed. The SW timer causes VM1 to wake up, either on the same CPU
> (preempting VM2) or on another. The timer is then restored with the
> pending virtual interrupt injected, and the guest does what it has to
> with it.

Thanks for the clarification.

So there is a virtual timer with real registers / interrupts (waking up the
host) for the running VMs, and SW timers for non-running VMs.

What is the benefit of having such a mechanism instead of real timers
injecting interrupts into the VM, without the virtual timer + save/restore?
Efficiency for the running VMs when setting up timers (saving the
privilege-change overhead)?
Daniel Lezcano April 25, 2017, 12:52 p.m. UTC | #7
On Tue, Apr 25, 2017 at 12:22:30PM +0200, Christoffer Dall wrote:
> On Tue, Apr 25, 2017 at 11:49:27AM +0200, Daniel Lezcano wrote:
> 
> [...]
> 
> > > 
> > > The idle code is very much *not* aware of anything concerning that guest
> > > timer.
> > 
> > Just for my own curiosity: if there are two VMs (VM1 and VM2), VM1 sets timer1
> > at <time> and exits, then VM2 runs and sets timer2 at <time+delta>.
> >
> > Timer1 for VM1 is supposed to expire while VM2 is running. IIUC the virtual
> > timer is under the control of VM2 and will expire at <time+delta>.
> >
> > Does the host wake up with the SW timer and switch to VM1, which in turn
> > restores the timer and jumps into the virtual timer irq handler?
> >  
> The thing that may be missing here is that a VCPU thread (a collection of
> which makes up a VM) is just a thread from the point of view of Linux, and
> whether or not a guest schedules a timer should not affect the scheduler's
> decision to run a given thread, if the thread is runnable.
> 
> Whenever we run a VCPU thread, we look at its timer state (in software),
> calculate whether the guest should see a timer interrupt, and inject one if so
> (the hardware arch timer is not involved in this process at all).
> 
> We use timers in exactly two scenarios:
> 
>  1. The hardware arch timers are used to force an exit to the host when
>     the guest has programmed the timer, so we can do the in-software
>     calculation I mentioned above and inject a virtual software-generated
>     interrupt when the guest expects to see one.
> 
>  2. The guest goes to sleep (WFI) but has programmed a timer to be woken
>     up at some point.  KVM handles a WFI by blocking the VCPU thread,
>     which basically means making the thread interruptible and putting it
>     on a waitqueue.  In this case we schedule a software timer to make
>     the thread runnable again when the software timer fires (and the
>     scheduler runs that thread when it wants to after that).
> 
> If you have a VCPU thread from VM1 blocked, and you run a VCPU thread
> from VM2, then the VM2 VCPU thread will program the hardware arch timer
> with its own context while running. This has nothing to do with the VM1
> VCPU thread at this point, because that thread relies on the host Linux
> timekeeping infrastructure to become runnable some time in the future,
> and running a guest naturally doesn't mess with the host's timekeeping.
> 
> Hope this helps,

Yes, definitively. Thanks for the detailed description.

  -- Daniel
Marc Zyngier April 25, 2017, 1:22 p.m. UTC | #8
On 25/04/17 13:51, Daniel Lezcano wrote:
> On Tue, Apr 25, 2017 at 11:21:21AM +0100, Marc Zyngier wrote:
>> On 25/04/17 10:49, Daniel Lezcano wrote:
>>> On Tue, Apr 25, 2017 at 10:10:12AM +0100, Marc Zyngier wrote:
>>
>> [...]
>>
>>>>> +static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
>>>>> +{
>>>>> +	/*
>>>>> +	 * We don't need the measurement because the idle code already
>>>>> +	 * knows the next expiry event.
>>>>> +	 */
>>>>> +	if (act->flags & __IRQF_TIMER)
>>>>> +		return;
>>>>
>>>> And that's where this is really wrong for the KVM guest timer. As I
>>>> said, this timer is under complete control of the guest, and the rest of
>>>> the system doesn't know about it. KVM itself will only find out when the
>>>> vcpu does a VM exit for one reason or another, and will just save/restore
>>>> the state in order to be able to give the timer to another guest.
>>>>
>>>> The idle code is very much *not* aware of anything concerning that guest
>>>> timer.
>>>
>>> Just for my own curiosity: if there are two VMs (VM1 and VM2), VM1 sets timer1
>>> at <time> and exits, then VM2 runs and sets timer2 at <time+delta>.
>>>
>>> Timer1 for VM1 is supposed to expire while VM2 is running. IIUC the virtual
>>> timer is under the control of VM2 and will expire at <time+delta>.
>>>
>>> Does the host wake up with the SW timer and switch to VM1, which in turn
>>> restores the timer and jumps into the virtual timer irq handler?
>>
>> Indeed. The SW timer causes VM1 to wake up, either on the same CPU
>> (preempting VM2) or on another. The timer is then restored with the
>> pending virtual interrupt injected, and the guest does what it has to
>> with it.
> 
> Thanks for the clarification.
>
> So there is a virtual timer with real registers / interrupts (waking up the
> host) for the running VMs, and SW timers for non-running VMs.
>
> What is the benefit of having such a mechanism instead of real timers
> injecting interrupts into the VM, without the virtual timer + save/restore?
> Efficiency for the running VMs when setting up timers (saving the
> privilege-change overhead)?


You can't dedicate HW resources to virtual CPUs. It just doesn't scale.
Also, injecting HW interrupts into a guest is pretty hard work, for
multiple reasons:
  - the host needs to be in control of interrupt delivery (don't hog the
CPU with guest interrupts)
  - you want to be able to remap interrupts (id X on the host becomes id
Y on the guest),
  - you want to deal with migrating vcpus,
  - you want to deliver an interrupt to a vcpu that is *not* running.

It *is* doable, but it is not cheap at all from a HW point of view.

	M.
Daniel Lezcano April 25, 2017, 1:53 p.m. UTC | #9
On 25/04/2017 15:22, Marc Zyngier wrote:
> On 25/04/17 13:51, Daniel Lezcano wrote:
>> On Tue, Apr 25, 2017 at 11:21:21AM +0100, Marc Zyngier wrote:
>>> On 25/04/17 10:49, Daniel Lezcano wrote:
>>>> On Tue, Apr 25, 2017 at 10:10:12AM +0100, Marc Zyngier wrote:
>>>
>>> [...]
>>>
>>>>>> +static inline void setup_timings(struct irq_desc *desc, struct irqaction *act)
>>>>>> +{
>>>>>> +	/*
>>>>>> +	 * We don't need the measurement because the idle code already
>>>>>> +	 * knows the next expiry event.
>>>>>> +	 */
>>>>>> +	if (act->flags & __IRQF_TIMER)
>>>>>> +		return;
>>>>>
>>>>> And that's where this is really wrong for the KVM guest timer. As I
>>>>> said, this timer is under complete control of the guest, and the rest of
>>>>> the system doesn't know about it. KVM itself will only find out when the
>>>>> vcpu does a VM exit for one reason or another, and will just save/restore
>>>>> the state in order to be able to give the timer to another guest.
>>>>>
>>>>> The idle code is very much *not* aware of anything concerning that guest
>>>>> timer.
>>>>
>>>> Just for my own curiosity: if there are two VMs (VM1 and VM2), VM1 sets timer1
>>>> at <time> and exits, then VM2 runs and sets timer2 at <time+delta>.
>>>>
>>>> Timer1 for VM1 is supposed to expire while VM2 is running. IIUC the virtual
>>>> timer is under the control of VM2 and will expire at <time+delta>.
>>>>
>>>> Does the host wake up with the SW timer and switch to VM1, which in turn
>>>> restores the timer and jumps into the virtual timer irq handler?
>>>
>>> Indeed. The SW timer causes VM1 to wake up, either on the same CPU
>>> (preempting VM2) or on another. The timer is then restored with the
>>> pending virtual interrupt injected, and the guest does what it has to
>>> with it.
>>
>> Thanks for the clarification.
>>
>> So there is a virtual timer with real registers / interrupts (waking up the
>> host) for the running VMs, and SW timers for non-running VMs.
>>
>> What is the benefit of having such a mechanism instead of real timers
>> injecting interrupts into the VM, without the virtual timer + save/restore?
>> Efficiency for the running VMs when setting up timers (saving the
>> privilege-change overhead)?
> 
> 
> You can't dedicate HW resources to virtual CPUs. It just doesn't scale.
> Also, injecting HW interrupts into a guest is pretty hard work, for
> multiple reasons:
>   - the host needs to be in control of interrupt delivery (don't hog the
> CPU with guest interrupts)
>   - you want to be able to remap interrupts (id X on the host becomes id
> Y on the guest),
>   - you want to deal with migrating vcpus,
>   - you want to deliver an interrupt to a vcpu that is *not* running.
> 
> It *is* doable, but it is not cheap at all from a HW point of view.


Ok, I see.

Thanks!

  -- Daniel

Patch

diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
index 061ba7eed4ed..a4a81c6c7602 100644
--- a/kernel/irq/spurious.c
+++ b/kernel/irq/spurious.c
@@ -72,6 +72,7 @@  static int try_one_irq(struct irq_desc *desc, bool force)
 	 * marked polled are excluded from polling.
 	 */
 	if (irq_settings_is_per_cpu(desc) ||
+	    irq_settings_is_per_cpu_devid(desc) ||
 	    irq_settings_is_nested_thread(desc) ||
 	    irq_settings_is_polled(desc))
 		goto out;