diff mbox series

[V3] lib: multiply the max_runtime if detect slow kconfigs

Message ID 20241212060448.204158-1-liwang@redhat.com
State Accepted
Headers show
Series [V3] lib: multiply the max_runtime if detect slow kconfigs | expand

Commit Message

Li Wang Dec. 12, 2024, 6:04 a.m. UTC
The method adjusts the max_runtime for test cases by multiplying
it by a factor (4x) if any slower kernel options are detected.
Debug kernel configurations (such as CONFIG_KASAN, CONFIG_PROVE_LOCKING, etc.)
are known to degrade performance, and this adjustment ensures
that tests do not fail prematurely due to timeouts.

As Cyril pointed out, a debug kernel will typically run
slower by a factor of N; while determining the exact value
of N is challenging, a reasonable upper bound is sufficient
for practical purposes.

Signed-off-by: Li Wang <liwang@redhat.com>
---
 include/tst_kconfig.h | 13 +++++++++++++
 lib/tst_kconfig.c     | 39 +++++++++++++++++++++++++++++++++++++++
 lib/tst_test.c        |  3 +++
 3 files changed, 55 insertions(+)

Comments

Petr Vorel Dec. 13, 2024, 10:40 p.m. UTC | #1
Hi Li,

> The method adjusts the max_runtime for test cases by multiplying
> it by a factor (4x) if any slower kernel options are detected.
> Debug kernel configurations (such as CONFIG_KASAN, CONFIG_PROVE_LOCKING, etc.)
> are known to degrade performance, and this adjustment ensures
> that tests do not fail prematurely due to timeouts.

> As Cyril pointed out that a debug kernel will typically run
> slower by a factor of N, and while determining the exact value
> of N is challenging, so a reasonable upper bound is sufficient
> for practical purposes.

> Signed-off-by: Li Wang <liwang@redhat.com>
> ---
>  include/tst_kconfig.h | 13 +++++++++++++
>  lib/tst_kconfig.c     | 39 +++++++++++++++++++++++++++++++++++++++
>  lib/tst_test.c        |  3 +++
>  3 files changed, 55 insertions(+)

> diff --git a/include/tst_kconfig.h b/include/tst_kconfig.h
> index 23f807409..291c34b11 100644
> --- a/include/tst_kconfig.h
> +++ b/include/tst_kconfig.h
> @@ -98,4 +98,17 @@ struct tst_kcmdline_var {
>   */
>  void tst_kcmdline_parse(struct tst_kcmdline_var params[], size_t params_len);

LGTM, few comments below
Reviewed-by: Petr Vorel <pvorel@suse.cz>


> +/*
> + * Check if any performance-degrading kernel configs are enabled.

Could you please change this to the following before merging:

/**
 * tst_has_slow_kconfig() - Check if any performance-degrading kernel configs are enabled.

To comply with kernel-doc formatting?
tst_kconfig.h has not been added to the sphinx doc yet, but it would be nice to add
new code with proper formatting.
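Assembled from the pieces quoted in this thread, the full comment in kernel-doc style might look like this (a sketch based on the original wording):

```c
/**
 * tst_has_slow_kconfig() - Check if any performance-degrading kernel configs are enabled.
 *
 * Iterates over the list of slow kernel configuration options and checks
 * whether any of them are enabled in the running kernel. These options are
 * known to degrade system performance when enabled.
 *
 * Return: 1 if at least one slow kernel config is enabled, 0 otherwise.
 */
int tst_has_slow_kconfig(void);
```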

Kind regards,
Petr

> + *
> + * This function iterates over the list of slow kernel configuration options
> + * (`tst_slow_kconfigs`) and checks if any of them are enabled in the running kernel.
> + * These options are known to degrade system performance when enabled.
> + *
> + * Return:
> + * - 1 if at least one slow kernel config is enabled.
> + * - 0 if none of the slow kernel configs are enabled.
> + */
> +int tst_has_slow_kconfig(void);
> +
>  #endif	/* TST_KCONFIG_H__ */
> diff --git a/lib/tst_kconfig.c b/lib/tst_kconfig.c
> index 6d6b1da18..92c27cb35 100644
> --- a/lib/tst_kconfig.c
> +++ b/lib/tst_kconfig.c
> @@ -631,3 +631,42 @@ void tst_kcmdline_parse(struct tst_kcmdline_var params[], size_t params_len)

>  	SAFE_FCLOSE(f);
>  }
> +
> +/*
> + * List of kernel config options that may degrade performance when enabled.
> + */
> +static struct tst_kconfig_var slow_kconfigs[] = {
> +	TST_KCONFIG_INIT("CONFIG_PROVE_LOCKING"),
> +	TST_KCONFIG_INIT("CONFIG_LOCKDEP"),
> +	TST_KCONFIG_INIT("CONFIG_DEBUG_SPINLOCK"),
> +	TST_KCONFIG_INIT("CONFIG_DEBUG_RT_MUTEXES"),
> +	TST_KCONFIG_INIT("CONFIG_DEBUG_MUTEXES"),
> +	TST_KCONFIG_INIT("CONFIG_DEBUG_PAGEALLOC"),
Does CONFIG_DEBUG_PAGEALLOC itself prolong the run? Isn't it that only when
debug_guardpage_minorder=... or debug_pagealloc=... is set?

https://www.kernel.org/doc/html/v5.2/admin-guide/kernel-parameters.html

I would need to run the test with these to see the difference.


> +	TST_KCONFIG_INIT("CONFIG_KASAN"),
> +	TST_KCONFIG_INIT("CONFIG_SLUB_RCU_DEBUG"),
> +	TST_KCONFIG_INIT("CONFIG_TRACE_IRQFLAGS"),
> +	TST_KCONFIG_INIT("CONFIG_LATENCYTOP"),
> +	TST_KCONFIG_INIT("CONFIG_DEBUG_NET"),
> +	TST_KCONFIG_INIT("CONFIG_EXT4_DEBUG"),
> +	TST_KCONFIG_INIT("CONFIG_QUOTA_DEBUG"),
> +	TST_KCONFIG_INIT("CONFIG_FAULT_INJECTION"),
> +	TST_KCONFIG_INIT("CONFIG_DEBUG_OBJECTS")
> +};
> +
> +int tst_has_slow_kconfig(void)
> +{
> +	unsigned int i;
> +
> +	tst_kconfig_read(slow_kconfigs, ARRAY_SIZE(slow_kconfigs));
> +
Maybe add a TINFO message here, "checking for options which slow the execution"?
Or print it (once) only if an option is detected? Because it's not obvious why we are
detecting it. Or, after searching, print what we did (4x prolonged runtime).

> +	for (i = 0; i < ARRAY_SIZE(slow_kconfigs); i++) {
> +		if (slow_kconfigs[i].choice == 'y') {
> +			tst_res(TINFO,
> +				"%s kernel option detected",
> +				slow_kconfigs[i].id);
> +			return 1;
> +		}
> +	}
> +
> +	return 0;
> +}
> diff --git a/lib/tst_test.c b/lib/tst_test.c
> index 8db554dea..f4e667240 100644
> --- a/lib/tst_test.c
> +++ b/lib/tst_test.c
> @@ -555,6 +555,9 @@ static int multiply_runtime(int max_runtime)

>  	parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);

> +	if (tst_has_slow_kconfig())
> +		max_runtime *= 4;
Maybe note here what we do? (TINFO)

Kind regards,
Petr
> +
>  	return max_runtime * runtime_mul;
>  }
Li Wang Dec. 16, 2024, 9:37 a.m. UTC | #2
On Sat, Dec 14, 2024 at 6:40 AM Petr Vorel <pvorel@suse.cz> wrote:

> Hi Li,
>
> > The method adjusts the max_runtime for test cases by multiplying
> > it by a factor (4x) if any slower kernel options are detected.
> > Debug kernel configurations (such as CONFIG_KASAN, CONFIG_PROVE_LOCKING,
> etc.)
> > are known to degrade performance, and this adjustment ensures
> > that tests do not fail prematurely due to timeouts.
>
> > As Cyril pointed out that a debug kernel will typically run
> > slower by a factor of N, and while determining the exact value
> > of N is challenging, so a reasonable upper bound is sufficient
> > for practical purposes.
>
> > Signed-off-by: Li Wang <liwang@redhat.com>
> > ---
> >  include/tst_kconfig.h | 13 +++++++++++++
> >  lib/tst_kconfig.c     | 39 +++++++++++++++++++++++++++++++++++++++
> >  lib/tst_test.c        |  3 +++
> >  3 files changed, 55 insertions(+)
>
> > diff --git a/include/tst_kconfig.h b/include/tst_kconfig.h
> > index 23f807409..291c34b11 100644
> > --- a/include/tst_kconfig.h
> > +++ b/include/tst_kconfig.h
> > @@ -98,4 +98,17 @@ struct tst_kcmdline_var {
> >   */
> >  void tst_kcmdline_parse(struct tst_kcmdline_var params[], size_t
> params_len);
>
> LGTM, few comments below
> Reviewed-by: Petr Vorel <pvorel@suse.cz>
>
>
> > +/*
> > + * Check if any performance-degrading kernel configs are enabled.
>
> Could you please before merge change this to:
>
> /**
>  * tst_has_slow_kconfig() - Check if any performance-degrading kernel
> configs are enabled.
>
> To comply kernel doc formatting?
> tst_kconfig.h has not been added to sphinx doc yet, but it would be nice
> to add
> new code with proper formatting.
>
> Kind regards,
> Petr
>
> > + *
> > + * This function iterates over the list of slow kernel configuration
> options
> > + * (`tst_slow_kconfigs`) and checks if any of them are enabled in the
> running kernel.
> > + * These options are known to degrade system performance when enabled.
> > + *
> > + * Return:
> > + * - 1 if at least one slow kernel config is enabled.
> > + * - 0 if none of the slow kernel configs are enabled.
> > + */
> > +int tst_has_slow_kconfig(void);
> > +
> >  #endif       /* TST_KCONFIG_H__ */
> > diff --git a/lib/tst_kconfig.c b/lib/tst_kconfig.c
> > index 6d6b1da18..92c27cb35 100644
> > --- a/lib/tst_kconfig.c
> > +++ b/lib/tst_kconfig.c
> > @@ -631,3 +631,42 @@ void tst_kcmdline_parse(struct tst_kcmdline_var
> params[], size_t params_len)
>
> >       SAFE_FCLOSE(f);
> >  }
> > +
> > +/*
> > + * List of kernel config options that may degrade performance when
> enabled.
> > + */
> > +static struct tst_kconfig_var slow_kconfigs[] = {
> > +     TST_KCONFIG_INIT("CONFIG_PROVE_LOCKING"),
> > +     TST_KCONFIG_INIT("CONFIG_LOCKDEP"),
> > +     TST_KCONFIG_INIT("CONFIG_DEBUG_SPINLOCK"),
> > +     TST_KCONFIG_INIT("CONFIG_DEBUG_RT_MUTEXES"),
> > +     TST_KCONFIG_INIT("CONFIG_DEBUG_MUTEXES"),
> > +     TST_KCONFIG_INIT("CONFIG_DEBUG_PAGEALLOC"),
> Does CONFIG_DEBUG_PAGEALLOC itself prolong the run? Isn't it that only when
> debug_guardpage_minorder=... or debug_pagealloc=... is set?
>

Good catch.

I guess that won't impact the kernel performance if none of the
parameters is set, because, according to the doc, it is disabled by default.

  "When CONFIG_DEBUG_PAGEALLOC is set, this parameter
  enables the feature at boot time. In default, it is disabled.
  ....
  if we don't enable it at boot time and the system will work
  mostly same with the kernel built without CONFIG_DEBUG_PAGEALLOC."

So I would like to remove CONFIG_DEBUG_PAGEALLOC from
the detection.



> https://www.kernel.org/doc/html/v5.2/admin-guide/kernel-parameters.html
>
> I would need to run the test with these to see the difference.
>

Any new findings?



>
>
> > +     TST_KCONFIG_INIT("CONFIG_KASAN"),
> > +     TST_KCONFIG_INIT("CONFIG_SLUB_RCU_DEBUG"),
> > +     TST_KCONFIG_INIT("CONFIG_TRACE_IRQFLAGS"),
> > +     TST_KCONFIG_INIT("CONFIG_LATENCYTOP"),
> > +     TST_KCONFIG_INIT("CONFIG_DEBUG_NET"),
> > +     TST_KCONFIG_INIT("CONFIG_EXT4_DEBUG"),
> > +     TST_KCONFIG_INIT("CONFIG_QUOTA_DEBUG"),
> > +     TST_KCONFIG_INIT("CONFIG_FAULT_INJECTION"),
> > +     TST_KCONFIG_INIT("CONFIG_DEBUG_OBJECTS")
> > +};
> > +
> > +int tst_has_slow_kconfig(void)
> > +{
> > +     unsigned int i;
> > +
> > +     tst_kconfig_read(slow_kconfigs, ARRAY_SIZE(slow_kconfigs));
> > +
> Maybe here TINFO message "checking for options which slow the execution?
> Or print it (once) only if option detected? Because it's not obvious why
> we are
> detecting it. Or after searching print what we did (4x prolonged runtime).
>

Agree, the rest comments all look good.



>
> > +     for (i = 0; i < ARRAY_SIZE(slow_kconfigs); i++) {
> > +             if (slow_kconfigs[i].choice == 'y') {
> > +                     tst_res(TINFO,
> > +                             "%s kernel option detected",
> > +                             slow_kconfigs[i].id);
> > +                     return 1;
> > +             }
> > +     }
> > +
> > +     return 0;
> > +}
> > diff --git a/lib/tst_test.c b/lib/tst_test.c
> > index 8db554dea..f4e667240 100644
> > --- a/lib/tst_test.c
> > +++ b/lib/tst_test.c
> > @@ -555,6 +555,9 @@ static int multiply_runtime(int max_runtime)
>
> >       parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);
>
> > +     if (tst_has_slow_kconfig())
> > +             max_runtime *= 4;
> Maybe note here what we do? (TINFO)
>
> Kind regards,
> Petr
> > +
> >       return max_runtime * runtime_mul;
> >  }
>
>
Petr Vorel Dec. 16, 2024, 12:28 p.m. UTC | #3
Hi Li,

...
> > > +/*
> > > + * List of kernel config options that may degrade performance when
> > enabled.
> > > + */
> > > +static struct tst_kconfig_var slow_kconfigs[] = {
> > > +     TST_KCONFIG_INIT("CONFIG_PROVE_LOCKING"),
> > > +     TST_KCONFIG_INIT("CONFIG_LOCKDEP"),
> > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_SPINLOCK"),
> > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_RT_MUTEXES"),
> > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_MUTEXES"),
> > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_PAGEALLOC"),
> > Does CONFIG_DEBUG_PAGEALLOC itself prolong the run? Isn't it that only when
> > debug_guardpage_minorder=... or debug_pagealloc=... is set?

> Good catch.

> I guess that won't impact the kernel performance if not set any
> of the parameters, because from the doc it is disabled by default.

>   "When CONFIG_DEBUG_PAGEALLOC is set, this parameter
>   enables the feature at boot time. In default, it is disabled.
>   ....
>   if we don't enable it at boot time and the the system will work
>   mostly same with the kernel built without CONFIG_DEBUG_PAGEALLOC."

> So I would like to remove CONFIG_DEBUG_PAGEALLOC from
> the detecting.

Or maybe to detect if debug_pagealloc kernel cmdline is set with tst_kcmdline_parse()?

OTOH we run with debug_pagealloc=on only syscalls and some long running tests
(e.g. bind06) are even slightly faster than when running without it. But that
may be affected by QEMU host. Therefore let's skip CONFIG_DEBUG_PAGEALLOC until
I find a time to test how it affects the runtime.

> > https://www.kernel.org/doc/html/v5.2/admin-guide/kernel-parameters.html

> > I would need to run the test with these to see the difference.


> Any new found?

I'm sorry, I haven't tested it yet. Feel free not to wait, and merge. I'll try to
do it soon.

Kind regards,
Petr


> > > +     TST_KCONFIG_INIT("CONFIG_KASAN"),
> > > +     TST_KCONFIG_INIT("CONFIG_SLUB_RCU_DEBUG"),
> > > +     TST_KCONFIG_INIT("CONFIG_TRACE_IRQFLAGS"),
> > > +     TST_KCONFIG_INIT("CONFIG_LATENCYTOP"),
> > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_NET"),
> > > +     TST_KCONFIG_INIT("CONFIG_EXT4_DEBUG"),
> > > +     TST_KCONFIG_INIT("CONFIG_QUOTA_DEBUG"),
> > > +     TST_KCONFIG_INIT("CONFIG_FAULT_INJECTION"),
> > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_OBJECTS")
> > > +};
> > > +
> > > +int tst_has_slow_kconfig(void)
> > > +{
> > > +     unsigned int i;
> > > +
> > > +     tst_kconfig_read(slow_kconfigs, ARRAY_SIZE(slow_kconfigs));
> > > +
> > Maybe here TINFO message "checking for options which slow the execution?
> > Or print it (once) only if option detected? Because it's not obvious why
> > we are
> > detecting it. Or after searching print what we did (4x prolonged runtime).


> Agree, the rest comments all look good.

+1

Kind regards,
Petr
Cyril Hrubis Dec. 16, 2024, 12:51 p.m. UTC | #4
Hi!
Generally looks good now.

It would be better if the newly added function had a proper
documentation comment, as Petr pointed out.

So as long as you fix the minor issues pointed out by Petr you can add
my:

Reviewed-by: Cyril Hrubis <chrubis@suse.cz>
Cyril Hrubis Dec. 16, 2024, 1 p.m. UTC | #5
Hi!
> > +	TST_KCONFIG_INIT("CONFIG_KASAN"),
> > +	TST_KCONFIG_INIT("CONFIG_SLUB_RCU_DEBUG"),
> > +	TST_KCONFIG_INIT("CONFIG_TRACE_IRQFLAGS"),
> > +	TST_KCONFIG_INIT("CONFIG_LATENCYTOP"),
> > +	TST_KCONFIG_INIT("CONFIG_DEBUG_NET"),
> > +	TST_KCONFIG_INIT("CONFIG_EXT4_DEBUG"),
> > +	TST_KCONFIG_INIT("CONFIG_QUOTA_DEBUG"),
> > +	TST_KCONFIG_INIT("CONFIG_FAULT_INJECTION"),
> > +	TST_KCONFIG_INIT("CONFIG_DEBUG_OBJECTS")
> > +};
> > +
> > +int tst_has_slow_kconfig(void)
> > +{
> > +	unsigned int i;
> > +
> > +	tst_kconfig_read(slow_kconfigs, ARRAY_SIZE(slow_kconfigs));
> > +
> Maybe here TINFO message "checking for options which slow the execution?
> Or print it (once) only if option detected? Because it's not obvious why we are
> detecting it. Or after searching print what we did (4x prolonged runtime).
>
> > +	for (i = 0; i < ARRAY_SIZE(slow_kconfigs); i++) {
> > +		if (slow_kconfigs[i].choice == 'y') {
> > +			tst_res(TINFO,
> > +				"%s kernel option detected",
> > +				slow_kconfigs[i].id);
> > +			return 1;
> > +		}
> > +	}
> > +
> > +	return 0;
> > +}
> > diff --git a/lib/tst_test.c b/lib/tst_test.c
> > index 8db554dea..f4e667240 100644
> > --- a/lib/tst_test.c
> > +++ b/lib/tst_test.c
> > @@ -555,6 +555,9 @@ static int multiply_runtime(int max_runtime)
> 
> >  	parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);
> 
> > +	if (tst_has_slow_kconfig())
> > +		max_runtime *= 4;
> Maybe note here what we do? (TINFO)

That really depends on how verbose we want to be; we already print the
overall test timeout, which is timeout + runtime. So it's somewhat visible
that the test runtime has been increased. Maybe we should make the info
message in set_timeout() better by printing the runtime separately there
if it's non-zero.
Petr Vorel Dec. 16, 2024, 5:29 p.m. UTC | #6
Hi all,

...
> > > +	if (tst_has_slow_kconfig())
> > > +		max_runtime *= 4;
> > Maybe note here what we do? (TINFO)

> That really depends on how verbose we want to be, we already print the
> overall test timeout which is timeout + runtime. So it's somehow visible
> in the test runtime has been increased. Maybe we should make the info
> message in set_timeout() better by printing the runtime separately there
> if non-zero.

Sounds useful to me.

Kind regards,
Petr
Li Wang Dec. 17, 2024, 2:40 a.m. UTC | #7
On Mon, Dec 16, 2024 at 8:29 PM Petr Vorel <pvorel@suse.cz> wrote:

> Hi Li,
>
> ...
> > > > +/*
> > > > + * List of kernel config options that may degrade performance when
> > > enabled.
> > > > + */
> > > > +static struct tst_kconfig_var slow_kconfigs[] = {
> > > > +     TST_KCONFIG_INIT("CONFIG_PROVE_LOCKING"),
> > > > +     TST_KCONFIG_INIT("CONFIG_LOCKDEP"),
> > > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_SPINLOCK"),
> > > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_RT_MUTEXES"),
> > > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_MUTEXES"),
> > > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_PAGEALLOC"),
> > > Does CONFIG_DEBUG_PAGEALLOC itself prolong the run? Isn't it that only
> when
> > > debug_guardpage_minorder=... or debug_pagealloc=... is set?
>
> > Good catch.
>
> > I guess that won't impact the kernel performance if not set any
> > of the parameters, because from the doc it is disabled by default.
>
> >   "When CONFIG_DEBUG_PAGEALLOC is set, this parameter
> >   enables the feature at boot time. In default, it is disabled.
> >   ....
> >   if we don't enable it at boot time and the the system will work
> >   mostly same with the kernel built without CONFIG_DEBUG_PAGEALLOC."
>
> > So I would like to remove CONFIG_DEBUG_PAGEALLOC from
> > the detecting.
>
> Or maybe to detect if debug_pagealloc kernel cmdline is set with
> tst_kcmdline_parse()?
>

Ok, later we can do that in a separate patch by adding a slow_params[].
Another function like tst_has_slow_param() would work much like this
method, just detecting the slow kcmdline parameters.



>
> OTOH we run with debug_pagealloc=on only syscalls and some long running
> tests
> (e.g. bind06) are even slightly faster than when running without it. But
> that
> may be affected by QEMU host. Therefore let's skip CONFIG_DEBUG_PAGEALLOC
> until
> I find a time to test how it affects the runtime.
>

Thanks, I don't think it's helpful to detect configurations that are not
always enabled. We typically test the general or debug kernel in the
productive environment but do not often test with specific debug
kernel configures.

But I keep an open mind if we need to add more fine-grained testing
controls in the future.



>
> > >
> https://www.kernel.org/doc/html/v5.2/admin-guide/kernel-parameters.html
>
> > > I would need to run the test with these to see the difference.
>
>
> > Any new found?
>
> I'm sorry I haven't tested yet. Feel free to not to wait and merge. I'll
> try to
> do it soon.
>
> Kind regards,
> Petr
>
>
> > > > +     TST_KCONFIG_INIT("CONFIG_KASAN"),
> > > > +     TST_KCONFIG_INIT("CONFIG_SLUB_RCU_DEBUG"),
> > > > +     TST_KCONFIG_INIT("CONFIG_TRACE_IRQFLAGS"),
> > > > +     TST_KCONFIG_INIT("CONFIG_LATENCYTOP"),
> > > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_NET"),
> > > > +     TST_KCONFIG_INIT("CONFIG_EXT4_DEBUG"),
> > > > +     TST_KCONFIG_INIT("CONFIG_QUOTA_DEBUG"),
> > > > +     TST_KCONFIG_INIT("CONFIG_FAULT_INJECTION"),
> > > > +     TST_KCONFIG_INIT("CONFIG_DEBUG_OBJECTS")
> > > > +};
> > > > +
> > > > +int tst_has_slow_kconfig(void)
> > > > +{
> > > > +     unsigned int i;
> > > > +
> > > > +     tst_kconfig_read(slow_kconfigs, ARRAY_SIZE(slow_kconfigs));
> > > > +
> > > Maybe here TINFO message "checking for options which slow the
> execution?
> > > Or print it (once) only if option detected? Because it's not obvious
> why
> > > we are
> > > detecting it. Or after searching print what we did (4x prolonged
> runtime).
>
>
> > Agree, the rest comments all look good.
>
> +1
>
> Kind regards,
> Petr
>
>
Li Wang Dec. 17, 2024, 3:46 a.m. UTC | #8
On Mon, Dec 16, 2024 at 9:00 PM Cyril Hrubis <chrubis@suse.cz> wrote:

> Hi!
> > > +   TST_KCONFIG_INIT("CONFIG_KASAN"),
> > > +   TST_KCONFIG_INIT("CONFIG_SLUB_RCU_DEBUG"),
> > > +   TST_KCONFIG_INIT("CONFIG_TRACE_IRQFLAGS"),
> > > +   TST_KCONFIG_INIT("CONFIG_LATENCYTOP"),
> > > +   TST_KCONFIG_INIT("CONFIG_DEBUG_NET"),
> > > +   TST_KCONFIG_INIT("CONFIG_EXT4_DEBUG"),
> > > +   TST_KCONFIG_INIT("CONFIG_QUOTA_DEBUG"),
> > > +   TST_KCONFIG_INIT("CONFIG_FAULT_INJECTION"),
> > > +   TST_KCONFIG_INIT("CONFIG_DEBUG_OBJECTS")
> > > +};
> > > +
> > > +int tst_has_slow_kconfig(void)
> > > +{
> > > +   unsigned int i;
> > > +
> > > +   tst_kconfig_read(slow_kconfigs, ARRAY_SIZE(slow_kconfigs));
> > > +
> > Maybe here TINFO message "checking for options which slow the execution?
> > Or print it (once) only if option detected? Because it's not obvious why
> we are
> > detecting it. Or after searching print what we did (4x prolonged
> runtime).
> >
> > > +   for (i = 0; i < ARRAY_SIZE(slow_kconfigs); i++) {
> > > +           if (slow_kconfigs[i].choice == 'y') {
> > > +                   tst_res(TINFO,
> > > +                           "%s kernel option detected",
> > > +                           slow_kconfigs[i].id);
> > > +                   return 1;
> > > +           }
> > > +   }
> > > +
> > > +   return 0;
> > > +}
> > > diff --git a/lib/tst_test.c b/lib/tst_test.c
> > > index 8db554dea..f4e667240 100644
> > > --- a/lib/tst_test.c
> > > +++ b/lib/tst_test.c
> > > @@ -555,6 +555,9 @@ static int multiply_runtime(int max_runtime)
> >
> > >     parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);
> >
> > > +   if (tst_has_slow_kconfig())
> > > +           max_runtime *= 4;
> > Maybe note here what we do? (TINFO)
>
> That really depends on how verbose we want to be, we already print the
> overall test timeout which is timeout + runtime. So it's somehow visible
> in the test runtime has been increased. Maybe we should make the info
> message in set_timeout() better by printing the runtime separately there
> if non-zero.
>

You mean we add a separate one-line print for the max_runtime?
That might be too verbose: whenever a test invokes
tst_set_max_runtime(), it will print twice in the logs.

# ./starvation
tst_tmpdir.c:316: TINFO: Using /tmp/LTP_staVoNT4a as tmpdir (xfs filesystem)
tst_test.c:1896: TINFO: LTP version: 20240930
tst_test.c:1900: TINFO: Tested kernel:
5.14.0-539.5812_1580643863.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Dec 9
18:03:26 UTC 2024 x86_64
tst_test.c:1731: TINFO: Timeout per run is 0h 00m 30s
starvation.c:98: TINFO: Setting affinity to CPU 0
starvation.c:52: TINFO: CPU did 120000000 loops in 75292us
tst_kconfig.c:88: TINFO: Parsing kernel config
'/lib/modules/5.14.0-539.5812_1580643863.el9.x86_64/build/.config'
tst_test.c:1739: TINFO: Updating max runtime to 0h 01m 15s
tst_test.c:1729: TINFO: Max_runtime is set to 75 seconds
tst_test.c:1731: TINFO: Timeout per run is 0h 01m 45s
starvation.c:150: TPASS: Haven't reproduced scheduller starvation.

Summary:
passed   1
failed   0
broken   0
skipped  0
warnings 0
Li Wang Dec. 18, 2024, 3:23 a.m. UTC | #9
I only pushed the patch + Petr's suggestion. Let's see how it goes in the
coming test.
Petr Vorel Dec. 19, 2024, 12:53 p.m. UTC | #10
Hi Li,

...
> +++ b/lib/tst_test.c
> @@ -555,6 +555,9 @@ static int multiply_runtime(int max_runtime)

>  	parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);

> +	if (tst_has_slow_kconfig())
> +		max_runtime *= 4;

FYI this change prolongs some fuzzy sync tests, e.g. setsockopt06 or writev03.
I guess this is a side effect, or not? Or does a slow machine really need to run
longer in order to trigger the bug in fuzzy sync?

We have a 900 sec timeout in openQA (the default LTP timeout is 600 sec), but it's not enough.
Sure, the solution is to increase it to 2400 (4*600), but then we really need to
have a more precise .max_runtime setup, otherwise tests which get stuck will
prolong testing 4x.

This is for syscalls, I haven't checked other runtests (specially these which
have high .max_runtime, e.g. ltp-aio-stress).

Kind regards,
Petr
Martin Doucha Dec. 19, 2024, 12:57 p.m. UTC | #11
Hi!

On 12. 12. 24 7:04, Li Wang wrote:
> The method adjusts the max_runtime for test cases by multiplying
> it by a factor (4x) if any slower kernel options are detected.
> Debug kernel configurations (such as CONFIG_KASAN, CONFIG_PROVE_LOCKING, etc.)
> are known to degrade performance, and this adjustment ensures
> that tests do not fail prematurely due to timeouts.
> 
> As Cyril pointed out that a debug kernel will typically run
> slower by a factor of N, and while determining the exact value
> of N is challenging, so a reasonable upper bound is sufficient
> for practical purposes.
> 
> Signed-off-by: Li Wang <liwang@redhat.com>
> ---
>   include/tst_kconfig.h | 13 +++++++++++++
>   lib/tst_kconfig.c     | 39 +++++++++++++++++++++++++++++++++++++++
>   lib/tst_test.c        |  3 +++
>   3 files changed, 55 insertions(+)
> 
> <snip>
 >
> diff --git a/lib/tst_test.c b/lib/tst_test.c
> index 8db554dea..f4e667240 100644
> --- a/lib/tst_test.c
> +++ b/lib/tst_test.c
> @@ -555,6 +555,9 @@ static int multiply_runtime(int max_runtime)
>   
>   	parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);
>   
> +	if (tst_has_slow_kconfig())
> +		max_runtime *= 4;
> +
>   	return max_runtime * runtime_mul;
>   }
>   

We have plenty of tests which keep looping until they run out of runtime 
and then automatically stop. These tests are not at risk of timing out 
and this patch only makes them run 3 times longer than necessary.

I'd recommend temporarily reverting this patch and adding it back with a 
new tst_test flag to identify tests which exit when runtime expires.
Li Wang Dec. 19, 2024, 1:07 p.m. UTC | #12
On Thu, Dec 19, 2024 at 8:54 PM Petr Vorel <pvorel@suse.cz> wrote:

> Hi Li,
>
> ...
> > +++ b/lib/tst_test.c
> > @@ -555,6 +555,9 @@ static int multiply_runtime(int max_runtime)
>
> >       parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);
>
> > +     if (tst_has_slow_kconfig())
> > +             max_runtime *= 4;
>
> FYI this change prolongs some fuzzy sync tests, e.g. setsockopt06 or
> writev03.
> I guess this is a side effect, or not? Or does slow machine really needs
> to run
> longer in order to trigger bug in fuzzy sync?
>

Yes, that will prolong the fuzzy tests on a debug kernel, but fortunately
fuzzy sync has 'pair->exec_loops' to control the total number of iterations.
It shouldn't prolong them too much, I guess.

For the tests which keep looping until they run out of runtime (as Martin
pointed out) we need to come up with a way to limit the runtime. I need to
go through and see how many there are.



>
> We have 900 sec timeout in openQA (default LTP timeout is 600 sec), but
> it's not enough.
> Sure, the solution is to increase it to 2400 (4*600), but then we need
> really to
> have more precise .max_runtime setup otherwise tests which got stuck will
> prolong testing 4x times.
>
> This is for syscalls, I haven't checked other runtests (specially these
> which
> have high .max_runtime, e.g. ltp-aio-stress).
>
> Kind regards,
> Petr
>
>
Petr Vorel Dec. 19, 2024, 1:07 p.m. UTC | #13
> Hi!

> On 12. 12. 24 7:04, Li Wang wrote:
> > The method adjusts the max_runtime for test cases by multiplying
> > it by a factor (4x) if any slower kernel options are detected.
> > Debug kernel configurations (such as CONFIG_KASAN, CONFIG_PROVE_LOCKING, etc.)
> > are known to degrade performance, and this adjustment ensures
> > that tests do not fail prematurely due to timeouts.

> > As Cyril pointed out that a debug kernel will typically run
> > slower by a factor of N, and while determining the exact value
> > of N is challenging, so a reasonable upper bound is sufficient
> > for practical purposes.

> > Signed-off-by: Li Wang <liwang@redhat.com>
> > ---
> >   include/tst_kconfig.h | 13 +++++++++++++
> >   lib/tst_kconfig.c     | 39 +++++++++++++++++++++++++++++++++++++++
> >   lib/tst_test.c        |  3 +++
> >   3 files changed, 55 insertions(+)

> > <snip>

> > diff --git a/lib/tst_test.c b/lib/tst_test.c
> > index 8db554dea..f4e667240 100644
> > --- a/lib/tst_test.c
> > +++ b/lib/tst_test.c
> > @@ -555,6 +555,9 @@ static int multiply_runtime(int max_runtime)
> >   	parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);
> > +	if (tst_has_slow_kconfig())
> > +		max_runtime *= 4;
> > +
> >   	return max_runtime * runtime_mul;
> >   }

> We have plenty of tests which keep looping until they run out of runtime and
> then automatically stop. These tests are not at risk of timing out and this
> patch only makes them run 3 times longer than necessary.

> I'd recommend temporarily reverting this patch and adding it back with a new
> tst_test flag to identify tests which exit when runtime expires.

+1. Li, could you please do it?

Kind regards,
Petr
Cyril Hrubis Dec. 19, 2024, 1:10 p.m. UTC | #14
Hi!
> We have plenty of tests which keep looping until they run out of runtime 
> and then automatically stop. These tests are not at risk of timing out 
> and this patch only makes them run 3 times longer than necessary.
> 
> I'd recommend temporarily reverting this patch and adding it back with a 
> new tst_test flag to identify tests which exit when runtime expires.

Sounds like we need to have two different runtime values in the tests:
max_runtime, which is the timeout for the test, and just runtime, which is
how long the test will loop.

And for the tests that loop for a given runtime, the max_runtime would be
automatically set to the runtime in the test library.
Li Wang Dec. 19, 2024, 1:12 p.m. UTC | #15
On Thu, Dec 19, 2024 at 9:07 PM Petr Vorel <pvorel@suse.cz> wrote:

> > Hi!
>
> > On 12. 12. 24 7:04, Li Wang wrote:
> > > The method adjusts the max_runtime for test cases by multiplying
> > > it by a factor (4x) if any slower kernel options are detected.
> > > Debug kernel configurations (such as CONFIG_KASAN,
> CONFIG_PROVE_LOCKING, etc.)
> > > are known to degrade performance, and this adjustment ensures
> > > that tests do not fail prematurely due to timeouts.
>
> > > As Cyril pointed out that a debug kernel will typically run
> > > slower by a factor of N, and while determining the exact value
> > > of N is challenging, so a reasonable upper bound is sufficient
> > > for practical purposes.
>
> > > Signed-off-by: Li Wang <liwang@redhat.com>
> > > ---
> > >   include/tst_kconfig.h | 13 +++++++++++++
> > >   lib/tst_kconfig.c     | 39 +++++++++++++++++++++++++++++++++++++++
> > >   lib/tst_test.c        |  3 +++
> > >   3 files changed, 55 insertions(+)
>
> > > <snip>
>
> > > diff --git a/lib/tst_test.c b/lib/tst_test.c
> > > index 8db554dea..f4e667240 100644
> > > --- a/lib/tst_test.c
> > > +++ b/lib/tst_test.c
> > > @@ -555,6 +555,9 @@ static int multiply_runtime(int max_runtime)
> > >     parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);
> > > +   if (tst_has_slow_kconfig())
> > > +           max_runtime *= 4;
> > > +
> > >     return max_runtime * runtime_mul;
> > >   }
>
> > We have plenty of tests which keep looping until they run out of runtime
> and
> > then automatically stop. These tests are not at risk of timing out and
> this
> > patch only makes them run 3 times longer than necessary.
>

We can bypass the multiplier when we recognize those tests.



>
> > I'd recommend temporarily reverting this patch and adding it back with a
> new
> > tst_test flag to identify tests which exit when runtime expires.
>
> +1. Li, could you please do it?
>

Sure, but I am wondering: are you running the latest LTP branch in a
production environment? If so, I am sorry for prolonging the test time.
If not, could you give me some time to resolve it in a new patch?
Li Wang Dec. 19, 2024, 1:14 p.m. UTC | #16
On Thu, Dec 19, 2024 at 9:10 PM Cyril Hrubis <chrubis@suse.cz> wrote:

> Hi!
> > We have plenty of tests which keep looping until they run out of runtime
> > and then automatically stop. These tests are not at risk of timing out
> > and this patch only makes them run 3 times longer than necessary.
> >
> > I'd recommend temporarily reverting this patch and adding it back with a
> > new tst_test flag to identify tests which exit when runtime expires.
>
> Sounds like we need to have two different runtime values in the tests,
> max_runtime which is the timeout for the test and just runtime which is
> for how long will the test loop.
>

Agree, if we define the looping runtime in a separate value, that would be
helpful.


> And for the test that loop for a given runtime the max_runtime would be
> automatically set to the runtime in the test library.
>
> --
> Cyril Hrubis
> chrubis@suse.cz
>
>
Cyril Hrubis Dec. 19, 2024, 1:19 p.m. UTC | #17
Hi!
> > Sounds like we need to have two different runtime values in the tests,
> > max_runtime which is the timeout for the test and just runtime which is
> > for how long will the test loop.
> >
> 
> Agree, if we define the looping runtime in a separate value, that would be
> helpful.

There was a problem with having the two different values in a single
variable before as well. We do have the LTP_RUNTIME_MUL which multiplied
both the timeout and runtime. These should be separate multipliers as
well.
Li Wang Dec. 19, 2024, 1:22 p.m. UTC | #18
On Thu, Dec 19, 2024 at 9:19 PM Cyril Hrubis <chrubis@suse.cz> wrote:

> Hi!
> > > Sounds like we need to have two different runtime values in the tests,
> > > max_runtime which is the timeout for the test and just runtime which is
> > > for how long will the test loop.
> > >
> >
> > Agree, if we define the looping runtime in a separate value, that would be
> > helpful.
>
> There was a problem with having the two different values in a single
> variable before as well. We do have the LTP_RUNTIME_MUL which multiplied
> both the timeout and runtime. These should be separate multipliers as
> well.
>

I noticed that tests using max_runtime to control the test time
always use another function tst_remaining_runtime(), so maybe
we can utilize this to bypass that.

Later I will look into details, now I have a meeting in the coming 2h.

@Petr, @Martin, Feel free to revert the two patches if you needed.
2da30df24e676d5f4cfcf0b11674cbdf11a19b8a
64e11fec079480179aa9473ae5e1e8ad78ef9ac3



>
> --
> Cyril Hrubis
> chrubis@suse.cz
>
>
Petr Vorel Dec. 19, 2024, 1:25 p.m. UTC | #19
> On Thu, Dec 19, 2024 at 9:19 PM Cyril Hrubis <chrubis@suse.cz> wrote:

> > Hi!
> > > > Sounds like we need to have two different runtime values in the tests,
> > > > max_runtime which is the timeout for the test and just runtime which is
> > > > for how long will the test loop.


> > > Agree, if we define the looping runtime in a separate value, that would be
> > > helpful.

> > There was a problem with having the two different values in a single
> > variable before as well. We do have the LTP_RUNTIME_MUL which multiplied
> > both the timeout and runtime. These should be separate multipliers as
> > well.


> I noticed that tests using max_runtime to control the test time
> always use another function tst_remaining_runtime(), so maybe
> we can utilize this to bypass that.

> Later I will look into details, now I have a meeting in the coming 2h.

> @Petr, @Martin, Feel free to revert the two patches if you needed.
> 2da30df24e676d5f4cfcf0b11674cbdf11a19b8a
> 64e11fec079480179aa9473ae5e1e8ad78ef9ac3

Thanks. OK, we can wait a bit (a few days) before reverting. Christmas is
coming, thus not a very busy time for us.

Kind regards,
Petr


> > --
> > Cyril Hrubis
> > chrubis@suse.cz
Petr Vorel Dec. 19, 2024, 1:28 p.m. UTC | #20
Hi Li, all,

> On Thu, Dec 19, 2024 at 9:07 PM Petr Vorel <pvorel@suse.cz> wrote:

> > +1. Li, could you please do it?

> Sure, but I am wondering: are you running the latest LTP branch in a
> production environment?

We mostly run the LTP stable branch with some exceptions: openSUSE Tumbleweed
and a few products in development. Usually it would be more work to backport
fixes to the stable package than to run master. But sometimes we get punished
for this approach :).

> If so, I am sorry for prolonging the test time.
> If not, could you give me some time to resolve it in a new patch?

Sure.

Kind regards,
Petr
Cyril Hrubis Dec. 19, 2024, 1:29 p.m. UTC | #21
Hi!
> > Sure, but I am wondering: are you running the latest LTP branch in a
> > production environment?
> 
> We mostly run the LTP stable branch with some exceptions: openSUSE Tumbleweed
> and a few products in development. Usually it would be more work to backport
> fixes to the stable package than to run master. But sometimes we get punished
> for this approach :).

It's also the best CI for LTP.
Li Wang Dec. 20, 2024, 3:59 a.m. UTC | #22
Hi All,

It suddenly occurred to me that we might be on the wrong track in
solving the timeout problem, because 'max_runtime' is intended to
set the expected runtime for the main part of the test and is also
used as the loop termination time by tst_remaining_runtime(). So the
multiplier has a side effect of extending the looping time of many tests.

But our original intention was to avoid test timeouts on slow systems
(e.g. debug kernels), and the timeout is a hard upper bound to prevent
the test from hanging indefinitely due to environmental factors
(e.g., slow systems, kernel debug mode).

So, shouldn't we only target to extend the timeout value?


--- a/lib/tst_test.c
+++ b/lib/tst_test.c
@@ -555,9 +555,6 @@ static int multiply_runtime(int max_runtime)

        parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);

-       if (tst_has_slow_kconfig())
-               max_runtime *= 4;
-
        return max_runtime * runtime_mul;
 }

@@ -1706,6 +1703,9 @@ unsigned int tst_multiply_timeout(unsigned int timeout)
        if (timeout < 1)
                tst_brk(TBROK, "timeout must to be >= 1! (%d)", timeout);

+       if (tst_has_slow_kconfig())
+               timeout *= 4;
+
        return timeout * timeout_mul;
 }
Li Wang Dec. 20, 2024, 7:19 a.m. UTC | #23
On Fri, Dec 20, 2024 at 11:59 AM Li Wang <liwang@redhat.com> wrote:

> Hi All,
>
> It suddenly occurred to me that we might be on the wrong track in
> solving the timeout problem. Because 'max_runtime' is intended to
> set the expected runtime for the main part of the test, and is also
> used as the loop termination time by tst_remaining_time(). So the
> multiplier has a side effect of extending the looping time of many tests.
>
> But our original intention was to avoid test timeouts on slow systems
> (e.g. debug kernels), and the timeout is a hard upper bound to prevent
> the test from hanging indefinitely due to environmental factors
> (e.g., slow systems, kernel debug mode).
>
> So, shouldn't we only target to extend the timeout value?
>
>
> --- a/lib/tst_test.c
> +++ b/lib/tst_test.c
> @@ -555,9 +555,6 @@ static int multiply_runtime(int max_runtime)
>
>         parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);
>
> -       if (tst_has_slow_kconfig())
> -               max_runtime *= 4;
> -
>         return max_runtime * runtime_mul;
>  }
>
> @@ -1706,6 +1703,9 @@ unsigned int tst_multiply_timeout(unsigned int timeout)
>         if (timeout < 1)
>                 tst_brk(TBROK, "timeout must to be >= 1! (%d)", timeout);
>
> +       if (tst_has_slow_kconfig())
> +               timeout *= 4;
> +
>         return timeout * timeout_mul;
>  }
>


Regarding the starvation.c test, it is not a common issue. The test
uses max_runtime to complete 1000000 loops within the required
time slot to judge FAIL or PASS, which is not perfect, since a debug
kernel will throw a false positive once the runtime is not enough
for looping.

Finding the right max_runtime value for all scenarios is not the
point and only distracts us, because its callibrate() barely
differs between a general and a debug kernel; it only depends
on the CPU model.

We can treat it as a dedicated case or skip it on debug kernels directly.
Patch

diff --git a/include/tst_kconfig.h b/include/tst_kconfig.h
index 23f807409..291c34b11 100644
--- a/include/tst_kconfig.h
+++ b/include/tst_kconfig.h
@@ -98,4 +98,17 @@  struct tst_kcmdline_var {
  */
 void tst_kcmdline_parse(struct tst_kcmdline_var params[], size_t params_len);
 
+/*
+ * Check if any performance-degrading kernel configs are enabled.
+ *
+ * This function iterates over the list of slow kernel configuration options
+ * (`tst_slow_kconfigs`) and checks if any of them are enabled in the running kernel.
+ * These options are known to degrade system performance when enabled.
+ *
+ * Return:
+ * - 1 if at least one slow kernel config is enabled.
+ * - 0 if none of the slow kernel configs are enabled.
+ */
+int tst_has_slow_kconfig(void);
+
 #endif	/* TST_KCONFIG_H__ */
diff --git a/lib/tst_kconfig.c b/lib/tst_kconfig.c
index 6d6b1da18..92c27cb35 100644
--- a/lib/tst_kconfig.c
+++ b/lib/tst_kconfig.c
@@ -631,3 +631,42 @@  void tst_kcmdline_parse(struct tst_kcmdline_var params[], size_t params_len)
 
 	SAFE_FCLOSE(f);
 }
+
+/*
+ * List of kernel config options that may degrade performance when enabled.
+ */
+static struct tst_kconfig_var slow_kconfigs[] = {
+	TST_KCONFIG_INIT("CONFIG_PROVE_LOCKING"),
+	TST_KCONFIG_INIT("CONFIG_LOCKDEP"),
+	TST_KCONFIG_INIT("CONFIG_DEBUG_SPINLOCK"),
+	TST_KCONFIG_INIT("CONFIG_DEBUG_RT_MUTEXES"),
+	TST_KCONFIG_INIT("CONFIG_DEBUG_MUTEXES"),
+	TST_KCONFIG_INIT("CONFIG_DEBUG_PAGEALLOC"),
+	TST_KCONFIG_INIT("CONFIG_KASAN"),
+	TST_KCONFIG_INIT("CONFIG_SLUB_RCU_DEBUG"),
+	TST_KCONFIG_INIT("CONFIG_TRACE_IRQFLAGS"),
+	TST_KCONFIG_INIT("CONFIG_LATENCYTOP"),
+	TST_KCONFIG_INIT("CONFIG_DEBUG_NET"),
+	TST_KCONFIG_INIT("CONFIG_EXT4_DEBUG"),
+	TST_KCONFIG_INIT("CONFIG_QUOTA_DEBUG"),
+	TST_KCONFIG_INIT("CONFIG_FAULT_INJECTION"),
+	TST_KCONFIG_INIT("CONFIG_DEBUG_OBJECTS")
+};
+
+int tst_has_slow_kconfig(void)
+{
+	unsigned int i;
+
+	tst_kconfig_read(slow_kconfigs, ARRAY_SIZE(slow_kconfigs));
+
+	for (i = 0; i < ARRAY_SIZE(slow_kconfigs); i++) {
+		if (slow_kconfigs[i].choice == 'y') {
+			tst_res(TINFO,
+				"%s kernel option detected",
+				slow_kconfigs[i].id);
+			return 1;
+		}
+	}
+
+	return 0;
+}
diff --git a/lib/tst_test.c b/lib/tst_test.c
index 8db554dea..f4e667240 100644
--- a/lib/tst_test.c
+++ b/lib/tst_test.c
@@ -555,6 +555,9 @@  static int multiply_runtime(int max_runtime)
 
 	parse_mul(&runtime_mul, "LTP_RUNTIME_MUL", 0.0099, 100);
 
+	if (tst_has_slow_kconfig())
+		max_runtime *= 4;
+
 	return max_runtime * runtime_mul;
 }