Message ID: 20220606114908.962562-4-alexandr.lobakin@intel.com
State: New
Series: bitops: let optimize out non-atomic bitops on compile-time constants
On Mon, Jun 06, 2022 at 01:49:04PM +0200, Alexander Lobakin wrote:
> Currently, the generic test_bit() function is defined as a one-liner
> and in case with constant bitmaps the compiler is unable to optimize
> it to a constant. At the same time, gen_test_and_*_bit() are being
> optimized pretty good.
> Define gen_test_bit() the same way as they are defined.
>
> Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>

Regardless of whether compilers prefer this, I think it's nicer to have
the structure consistent with the rest of the functions, so FWIW:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  include/asm-generic/bitops/generic-non-atomic.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h
> index 7a60adfa6e7d..202d8a3b40e1 100644
> --- a/include/asm-generic/bitops/generic-non-atomic.h
> +++ b/include/asm-generic/bitops/generic-non-atomic.h
> @@ -118,7 +118,11 @@ gen___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
>  static __always_inline int
>  gen_test_bit(unsigned int nr, const volatile unsigned long *addr)
>  {
> -	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
> +	const unsigned long *p = (const unsigned long *)addr + BIT_WORD(nr);
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long val = *p;
> +
> +	return !!(val & mask);
>  }
>
>  #endif /* __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H */
> --
> 2.36.1
On Mon, Jun 06, 2022 at 01:49PM +0200, Alexander Lobakin wrote:
> Currently, the generic test_bit() function is defined as a one-liner
> and in case with constant bitmaps the compiler is unable to optimize
> it to a constant. At the same time, gen_test_and_*_bit() are being
> optimized pretty good.
> Define gen_test_bit() the same way as they are defined.
>
> Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
> ---
>  include/asm-generic/bitops/generic-non-atomic.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h
> index 7a60adfa6e7d..202d8a3b40e1 100644
> --- a/include/asm-generic/bitops/generic-non-atomic.h
> +++ b/include/asm-generic/bitops/generic-non-atomic.h
> @@ -118,7 +118,11 @@ gen___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
>  static __always_inline int
>  gen_test_bit(unsigned int nr, const volatile unsigned long *addr)
>  {
> -	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
> +	const unsigned long *p = (const unsigned long *)addr + BIT_WORD(nr);
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long val = *p;
> +
> +	return !!(val & mask);

Unfortunately this makes the dereference of 'addr' non-volatile, and
effectively weakens test_bit() to the point where I'd no longer consider
it atomic. Per atomic_bitops.txt, test_bit() is atomic.

The generic version has been using a volatile access to make it atomic
(akin to generic READ_ONCE() casting to volatile). The volatile is also
the reason the compiler can't optimize much, because volatile forces a
real memory access.

Yes, confusingly, test_bit() lives in non-atomic.h, and this had caused
confusion before, but the decision was made that moving it will cause
headaches for ppc so it was left alone:
https://lore.kernel.org/all/87a78xgu8o.fsf@dja-thinkpad.axtens.net/T/#u

As for how to make test_bit() more compiler-optimization friendly, I'm
guessing that test_bit() needs some special casing where even the
generic arch_test_bit() is different from the gen_test_bit().
gen_test_bit() should probably assert that whatever it is called with
can actually be evaluated at compile-time so it is never accidentally
used otherwise.

I would also propose adding a comment close to the deref that test_bit()
is atomic and the deref needs to remain volatile, so future people will
not try to do the same optimization.

Thanks,
-- Marco
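Marco's point can be sketched in standalone C. The helper names below (`test_bit_volatile`, `test_bit_plain`, `demo_map`) are hypothetical, chosen for the sketch; the kernel's real code spells this differently, but the volatile-vs-plain distinction is the same:

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG ((unsigned int)(sizeof(unsigned long) * CHAR_BIT))

/*
 * Volatile variant: the volatile-qualified pointer forces a single real
 * load from memory on every call, which is what keeps the generic
 * test_bit() atomic -- and also what prevents constant folding.
 */
static inline int test_bit_volatile(unsigned int nr,
                                    const volatile unsigned long *addr)
{
        return 1UL & (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG));
}

/*
 * Non-volatile variant (the shape the patch introduces): same result,
 * but the compiler is free to fold or elide the load, so the read is
 * no longer guaranteed to be a single atomic access.
 */
static inline int test_bit_plain(unsigned int nr, const unsigned long *addr)
{
        return 1UL & (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG));
}

static const unsigned long demo_map[1] = { 0xAUL }; /* bits 1 and 3 set */
```

Both variants compute the same value; the difference is purely in what the compiler is allowed to do with the load.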
From: Marco Elver <elver@google.com>
Date: Tue, 7 Jun 2022 15:43:49 +0200

> On Mon, Jun 06, 2022 at 01:49PM +0200, Alexander Lobakin wrote:
> > Currently, the generic test_bit() function is defined as a one-liner
> > and in case with constant bitmaps the compiler is unable to optimize
> > it to a constant. At the same time, gen_test_and_*_bit() are being
> > optimized pretty good.
> > Define gen_test_bit() the same way as they are defined.
> >
> > Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>

...

> Unfortunately this makes the dereference of 'addr' non-volatile, and
> effectively weakens test_bit() to the point where I'd no longer consider
> it atomic. Per atomic_bitops.txt, test_bit() is atomic.
>
> The generic version has been using a volatile access to make it atomic
> (akin to generic READ_ONCE() casting to volatile). The volatile is also
> the reason the compiler can't optimize much, because volatile forces a
> real memory access.

Ah-ha, I see now. Thanks for catching and explaining this!

> Yes, confusingly, test_bit() lives in non-atomic.h, and this had caused
> confusion before, but the decision was made that moving it will cause
> headaches for ppc so it was left alone:
> https://lore.kernel.org/all/87a78xgu8o.fsf@dja-thinkpad.axtens.net/T/#u
>
> As for how to make test_bit() more compiler-optimization friendly, I'm
> guessing that test_bit() needs some special casing where even the
> generic arch_test_bit() is different from the gen_test_bit().
> gen_test_bit() should probably assert that whatever it is called with
> can actually be evaluated at compile-time so it is never accidentally
> used otherwise.

I like the idea! Will do in v2.
I can move the generics and after, right below them, define
'const_*' helpers which will mostly redirect to 'generic_*', but
for test_bit() it will be a separate function with no `volatile`
and with an assertion that the input args are constants.

> I would also propose adding a comment close to the deref that test_bit()
> is atomic and the deref needs to remain volatile, so future people will
> not try to do the same optimization.

I think that's also the reason why it's not underscored, right?

> Thanks,
> -- Marco

Thanks,
Olek
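The 'const_*' plus dispatch idea Olek describes might look roughly like the following sketch. The names (`const_test_bit`, `generic_test_bit`, `sample_map`) and the exact macro shape are assumptions for illustration, not the code that eventually landed; it relies on the GCC/Clang `__builtin_constant_p()` builtin:

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG ((unsigned int)(sizeof(unsigned long) * CHAR_BIT))

/* Compile-time-friendly variant: no volatile, so it may constant-fold. */
static inline int const_test_bit(unsigned int nr, const unsigned long *addr)
{
        return 1UL & (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG));
}

/* Runtime variant: volatile load, a real (atomic) memory access. */
static inline int generic_test_bit(unsigned int nr,
                                   const volatile unsigned long *addr)
{
        return 1UL & (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG));
}

/*
 * Dispatch: take the foldable path only when both the bit number and
 * the bitmap contents are known at compile time. __builtin_constant_p()
 * never evaluates its argument, so the deref below performs no access.
 * (Note: 'nr' is evaluated more than once -- fine for a sketch.)
 */
#define test_bit(nr, addr)                                              \
        (__builtin_constant_p(nr) &&                                    \
         __builtin_constant_p(*(const unsigned long *)(addr)) ?         \
         const_test_bit(nr, addr) : generic_test_bit(nr, addr))

static const unsigned long sample_map[2] = { 0x5UL, 0x1UL };
static unsigned int runtime_nr = 2; /* non-constant: forces the generic path */
```

Either branch yields the same value; the point is that callers with compile-time-constant arguments never pay for the volatile access.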
On Tue, 7 Jun 2022 at 18:05, Alexander Lobakin <alexandr.lobakin@intel.com> wrote:
>
> From: Marco Elver <elver@google.com>
> Date: Tue, 7 Jun 2022 15:43:49 +0200
>
> > On Mon, Jun 06, 2022 at 01:49PM +0200, Alexander Lobakin wrote:

...

> > Unfortunately this makes the dereference of 'addr' non-volatile, and
> > effectively weakens test_bit() to the point where I'd no longer consider
> > it atomic. Per atomic_bitops.txt, test_bit() is atomic.
> >
> > The generic version has been using a volatile access to make it atomic
> > (akin to generic READ_ONCE() casting to volatile). The volatile is also
> > the reason the compiler can't optimize much, because volatile forces a
> > real memory access.
>
> Ah-ha, I see now. Thanks for catching and explaining this!

...

> I like the idea! Will do in v2.
> I can move the generics and after, right below them, define
> 'const_*' helpers which will mostly redirect to 'generic_*', but
> for test_bit() it will be a separate function with no `volatile`
> and with an assertion that the input args are constants.

Be aware that there's already a "constant_test_bit()" in
arch/x86/include/asm/bitops.h, which selects between two versions of
test_bit() depending on whether 'nr' is constant. I guess you can steer
clear of that if you use "const_", but they do sound similar.

> > I would also propose adding a comment close to the deref that test_bit()
> > is atomic and the deref needs to remain volatile, so future people will
> > not try to do the same optimization.
>
> I think that's also the reason why it's not underscored, right?

Yes, the naming convention is that double-underscored ones are
non-atomic, so that's one clue indeed. Documentation/atomic_bitops.txt
is another, and unlike the other non-atomic bitops, its kernel-doc
comment also does not mention "This operation is non-atomic...".
On Tue, Jun 07, 2022 at 05:57:22PM +0200, Alexander Lobakin wrote:
> From: Marco Elver <elver@google.com>
> Date: Tue, 7 Jun 2022 15:43:49 +0200
> > On Mon, Jun 06, 2022 at 01:49PM +0200, Alexander Lobakin wrote:

...

> > I would also propose adding a comment close to the deref that test_bit()
> > is atomic and the deref needs to remain volatile, so future people will
> > not try to do the same optimization.
>
> I think that's also the reason why it's not underscored, right?

Non-__ prefixed bitops are atomic, __ non-atomic.
diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h
index 7a60adfa6e7d..202d8a3b40e1 100644
--- a/include/asm-generic/bitops/generic-non-atomic.h
+++ b/include/asm-generic/bitops/generic-non-atomic.h
@@ -118,7 +118,11 @@ gen___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
 static __always_inline int
 gen_test_bit(unsigned int nr, const volatile unsigned long *addr)
 {
-	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
+	const unsigned long *p = (const unsigned long *)addr + BIT_WORD(nr);
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long val = *p;
+
+	return !!(val & mask);
 }
 
 #endif /* __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H */
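Setting the atomicity question aside, the old one-liner and the patch's restructured form compute the same value for every bit. A small standalone check, with `BIT_WORD`/`BIT_MASK` re-derived locally and hypothetical names (`test_bit_old`, `test_bit_new`, `check_map`) so it compiles outside the kernel tree (the volatile qualifier is dropped on both sides so they match):

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG ((unsigned int)(sizeof(unsigned long) * CHAR_BIT))
#define BIT_WORD(nr)  ((nr) / BITS_PER_LONG)
#define BIT_MASK(nr)  (1UL << ((nr) % BITS_PER_LONG))

/* Old one-liner form. */
static int test_bit_old(unsigned int nr, const unsigned long *addr)
{
        return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG - 1)));
}

/* New structured form from the patch above. */
static int test_bit_new(unsigned int nr, const unsigned long *addr)
{
        const unsigned long *p = addr + BIT_WORD(nr);
        unsigned long mask = BIT_MASK(nr);
        unsigned long val = *p;

        return !!(val & mask);
}

static const unsigned long check_map[2] = { 0xF0F0F0F0UL, 0x1UL };

/* Both forms must agree for every bit in the map. */
static int forms_agree(void)
{
        for (unsigned int nr = 0; nr < 2 * BITS_PER_LONG; nr++)
                if (test_bit_old(nr, check_map) != test_bit_new(nr, check_map))
                        return 0;
        return 1;
}
```

The `nr & (BITS_PER_LONG - 1)` and `nr % BITS_PER_LONG` index computations are interchangeable because BITS_PER_LONG is a power of two.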
Currently, the generic test_bit() function is defined as a one-liner,
and in the case of constant bitmaps the compiler is unable to optimize
it to a constant. At the same time, gen_test_and_*_bit() are optimized
pretty well.
Define gen_test_bit() the same way they are defined.

Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
---
 include/asm-generic/bitops/generic-non-atomic.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)