
[2/2] accel/tcg: add assert() check in tb_invalidate_phys_page_range__locked()

Message ID 20230629082522.606219-3-mark.cave-ayland@ilande.co.uk
State New
Series accel/tcg: fix page invalidation in tb_invalidate_phys_range()

Commit Message

Mark Cave-Ayland June 29, 2023, 8:25 a.m. UTC
Add an assert() check in tb_invalidate_phys_page_range__locked() to ensure that
both the start and last addresses are within the same target page. Note that
due to performance concerns the check is only enabled when QEMU is configured
with --enable-debug-tcg.
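
For illustration only (not part of the patch): the assertion compares the page-aligned
portion of the two addresses, so it fails exactly when start and last fall in different
target pages. A minimal standalone sketch, assuming a 4 KiB target page
(TARGET_PAGE_BITS = 12; the real value varies per target and the stand-in macros below
are illustrative, not QEMU's definitions):

#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for QEMU's target page definitions. */
#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK ((uint64_t)-1 << TARGET_PAGE_BITS)

int main(void)
{
    uint64_t start = 0x1000, last = 0x1fff;

    /* Both addresses lie in the same 4 KiB page: the masked values match. */
    assert((last & TARGET_PAGE_MASK) == (start & TARGET_PAGE_MASK));

    /* 0x2000 belongs to the next page, so its masked value differs. */
    assert((0x2000 & TARGET_PAGE_MASK) != (start & TARGET_PAGE_MASK));
    return 0;
}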

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
---
 accel/tcg/tb-maint.c | 4 ++++
 1 file changed, 4 insertions(+)

Comments

Philippe Mathieu-Daudé June 29, 2023, 9:08 a.m. UTC | #1
On 29/6/23 10:25, Mark Cave-Ayland wrote:
> Add an assert() check in tb_invalidate_phys_page_range__locked() to ensure that
> both the start and last addresses are within the same target page. Note that
> due to performance concerns the check is only enabled when QEMU is configured
> with --enable-debug-tcg.
> 
> Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> ---
>   accel/tcg/tb-maint.c | 4 ++++
>   1 file changed, 4 insertions(+)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Michael Tokarev June 30, 2023, 8:05 a.m. UTC | #2
29.06.2023 11:25, Mark Cave-Ayland wrote:
> Add an assert() check in tb_invalidate_phys_page_range__locked() to ensure that
> both the start and last addresses are within the same target page. Note that
> due to performance concerns the check is only enabled when QEMU is configured
> with --enable-debug-tcg.

Performance concerns? That's two ANDs and one compare - is it really that
performance critical?

I'm just asking, I dunno.

Thanks,

/mjt

> Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> ---
>   accel/tcg/tb-maint.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
> index 33ea1aadd1..8cd730dcb0 100644
> --- a/accel/tcg/tb-maint.c
> +++ b/accel/tcg/tb-maint.c
> @@ -1092,6 +1092,10 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
>       TranslationBlock *current_tb = retaddr ? tcg_tb_lookup(retaddr) : NULL;
>   #endif /* TARGET_HAS_PRECISE_SMC */
>   
> +#ifdef CONFIG_DEBUG_TCG
> +    assert((last & TARGET_PAGE_MASK) == (start & TARGET_PAGE_MASK));
> +#endif
> +
>       /*
>        * We remove all the TBs in the range [start, last].
>        * XXX: see if in some cases it could be faster to invalidate all the code
BALATON Zoltan June 30, 2023, 11:25 a.m. UTC | #3
On Fri, 30 Jun 2023, Michael Tokarev wrote:
> 29.06.2023 11:25, Mark Cave-Ayland wrote:
>> Add an assert() check in tb_invalidate_phys_page_range__locked() to ensure that
>> both the start and last addresses are within the same target page. Note that
>> due to performance concerns the check is only enabled when QEMU is configured
>> with --enable-debug-tcg.
>
> Performance concerns? That's two ANDs and one compare - is it really that
> performance critical?
>
> I'm just asking, I dunno.

If something is called frequently enough, any small computation can add up. 
In this case invalidating pages is probably a performance hit already and 
hopefully does not happen too often, but then it's a good idea not to make 
it worse. As this is not something that should or could normally happen 
and only checks for programming errors, I think it's a good idea to only do 
it when debugging.
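
(A minimal sketch of that pattern, purely illustrative and not QEMU's actual 
macro: a debug-gated assertion like the hypothetical debug_only_assert() below 
compiles to nothing at all in a non-debug build, so the check costs nothing 
unless CONFIG_DEBUG_TCG is set.)

#include <assert.h>

#ifdef CONFIG_DEBUG_TCG
# define debug_only_assert(x) assert(x)     /* checked in debug builds */
#else
# define debug_only_assert(x) ((void)0)     /* no code emitted otherwise */
#endif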

Regards,
BALATON Zoltan

> Thanks,
>
> /mjt
>
>> Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
>> ---
>>   accel/tcg/tb-maint.c | 4 ++++
>>   1 file changed, 4 insertions(+)
>> 
>> diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
>> index 33ea1aadd1..8cd730dcb0 100644
>> --- a/accel/tcg/tb-maint.c
>> +++ b/accel/tcg/tb-maint.c
>> @@ -1092,6 +1092,10 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
>>       TranslationBlock *current_tb = retaddr ? tcg_tb_lookup(retaddr) : NULL;
>>   #endif /* TARGET_HAS_PRECISE_SMC */
>>   
>> +#ifdef CONFIG_DEBUG_TCG
>> +    assert((last & TARGET_PAGE_MASK) == (start & TARGET_PAGE_MASK));
>> +#endif
>> +
>>       /*
>>        * We remove all the TBs in the range [start, last].
>>        * XXX: see if in some cases it could be faster to invalidate all the code
>
>
>

Patch

diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 33ea1aadd1..8cd730dcb0 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -1092,6 +1092,10 @@  tb_invalidate_phys_page_range__locked(struct page_collection *pages,
     TranslationBlock *current_tb = retaddr ? tcg_tb_lookup(retaddr) : NULL;
 #endif /* TARGET_HAS_PRECISE_SMC */
 
+#ifdef CONFIG_DEBUG_TCG
+    assert((last & TARGET_PAGE_MASK) == (start & TARGET_PAGE_MASK));
+#endif
+
     /*
      * We remove all the TBs in the range [start, last].
      * XXX: see if in some cases it could be faster to invalidate all the code