Message ID | 20220606114908.962562-1-alexandr.lobakin@intel.com
Series     | bitops: let optimize out non-atomic bitops on compile-time constants
On Mon, Jun 06, 2022 at 01:49:01PM +0200, Alexander Lobakin wrote:
> While I was working on converting some structure fields from a fixed
> type to a bitmap, I started observing code size increase not only in
> places where the code works with the converted structure fields, but
> also where the converted vars were on the stack. That said, the
> following code:
>
> DECLARE_BITMAP(foo, BITS_PER_LONG) = { }; // -> unsigned long foo[1];
> unsigned long bar = BIT(BAR_BIT);
> unsigned long baz = 0;
>
> __set_bit(FOO_BIT, foo);
> baz |= BIT(BAZ_BIT);
>
> BUILD_BUG_ON(!__builtin_constant_p(test_bit(FOO_BIT, foo));
> BUILD_BUG_ON(!__builtin_constant_p(bar & BAR_BIT));
> BUILD_BUG_ON(!__builtin_constant_p(baz & BAZ_BIT));
>
> triggers the first assertion on x86_64, which means that the
> compiler is unable to evaluate it to a compile-time initializer
> when the architecture-specific bitop is used even if it's obvious.
> I found that this is due to that many architecture-specific
> non-atomic bitop implementations use inline asm or other hacks which
> are faster or more robust when working with "real" variables (i.e.
> fields from the structures etc.), but the compilers have no clue how
> to optimize them out when called on compile-time constants.
>
> So, in order to let the compiler optimize out such cases, expand the
> test_bit() and __*_bit() definitions with a compile-time condition
> check, so that they will pick the generic C non-atomic bitop
> implementations when all of the arguments passed are compile-time
> constants, which means that the result will be a compile-time
> constant as well and the compiler will produce more efficient and
> simple code in 100% cases (no changes when there's at least one
> non-compile-time-constant argument).
> The condition itself:
>
> if (
> __builtin_constant_p(nr) &&		/* <- bit position is constant */
> __builtin_constant_p(!!addr) &&	/* <- compiler knows bitmap addr is
> 					      always either NULL or not */
> addr &&				/* <- bitmap addr is not NULL */
> __builtin_constant_p(*addr)		/* <- compiler knows the value of
> 					      the target bitmap */
> )
> 	/* then pick the generic C variant
> else
> 	/* old code path, arch-specific
>
> I also tried __is_constexpr() as suggested by Andy, but it was
> always returning 0 ('not a constant') for the 2,3 and 4th
> conditions.
>
> The savings on x86_64 with LLVM are insane (.text):
>
> $ scripts/bloat-o-meter -c vmlinux.{base,test}
> add/remove: 72/75 grow/shrink: 182/518 up/down: 53925/-137810 (-83885)

FWIW, I gave this series a spin on arm64 with GCC 11.1.0 and LLVM 14.0.0.
Using defconfig and v5.19-rc1 as a baseline, we get about half that
difference with LLVM (and ~1/20th when using GCC):

% ./scripts/bloat-o-meter -t vmlinux-llvm-baseline vmlinux-llvm-bitops
add/remove: 21/11 grow/shrink: 620/651 up/down: 12060/-15824 (-3764)
...
Total: Before=16874812, After=16871048, chg -0.02%

% ./scripts/bloat-o-meter -t vmlinux-gcc-baseline vmlinux-gcc-bitops
add/remove: 92/29 grow/shrink: 933/2766 up/down: 39340/-82580 (-43240)
...
Total: Before=16337480, After=16294240, chg -0.26%

The vmlinux files are correspondingly smaller:

% ls -al vmlinux-*
-rwxr-xr-x 1 mark mark 44262216 Jun 6 13:58 vmlinux-gcc-baseline
-rwxr-xr-x 1 mark mark 44265240 Jun 6 14:01 vmlinux-gcc-bitops
-rwxr-xr-x 1 mark mark 43573760 Jun 6 14:07 vmlinux-llvm-baseline
-rwxr-xr-x 1 mark mark 43572560 Jun 6 14:11 vmlinux-llvm-bitops

...
though due to padding and alignment, the resulting Image is the same size:

% ls -al Image*
-rw-r--r-- 1 mark mark 36311552 Jun 6 13:58 Image-gcc-baseline
-rw-r--r-- 1 mark mark 36311552 Jun 6 14:01 Image-gcc-bitops
-rwxr-xr-x 1 mark mark 31218176 Jun 6 14:07 Image-llvm-baseline
-rwxr-xr-x 1 mark mark 31218176 Jun 6 14:11 Image-llvm-bitops

(as an aside, I need to go look at why the GCC Image is 5MB larger!).

Overall this looks like a nice cleanup/rework; I'll take a look over the
remaining patches in a bit.

Thanks,
Mark.

> $ scripts/bloat-o-meter -c vmlinux.{base,mod}
> add/remove: 7/1 grow/shrink: 1/19 up/down: 1135/-4082 (-2947)
>
> $ scripts/bloat-o-meter -c vmlinux.{base,all}
> add/remove: 79/76 grow/shrink: 184/537 up/down: 55076/-141892 (-86816)
>
> And the following:
>
> DECLARE_BITMAP(flags, __IP_TUNNEL_FLAG_NUM) = { };
> __be16 flags;
>
> __set_bit(IP_TUNNEL_CSUM_BIT, flags);
>
> tun_flags = cpu_to_be16(*flags & U16_MAX);
>
> if (test_bit(IP_TUNNEL_VTI_BIT, flags))
> 	tun_flags |= VTI_ISVTI;
>
> BUILD_BUG_ON(!__builtin_constant_p(tun_flags));
>
> doesn't blow up anymore.
>
> The series has been in intel-next for a while with no reported issues.
> Also available on my open GH[0].
>
> [0] https://github.com/alobakin/linux/commits/bitops
>
> Alexander Lobakin (6):
>   ia64, processor: fix -Wincompatible-pointer-types in ia64_get_irr()
>   bitops: always define asm-generic non-atomic bitops
>   bitops: define gen_test_bit() the same way as the rest of functions
>   bitops: unify non-atomic bitops prototypes across architectures
>   bitops: wrap non-atomic bitops with a transparent macro
>   bitops: let optimize out non-atomic bitops on compile-time constants
>
>  arch/alpha/include/asm/bitops.h                |  28 ++--
>  arch/hexagon/include/asm/bitops.h              |  23 ++--
>  arch/ia64/include/asm/bitops.h                 |  40 +++---
>  arch/ia64/include/asm/processor.h              |   2 +-
>  arch/m68k/include/asm/bitops.h                 |  47 +++++--
>  arch/sh/include/asm/bitops-op32.h              |  32 +++--
>  arch/sparc/include/asm/bitops_32.h             |  18 +--
>  arch/sparc/lib/atomic32.c                      |  12 +-
>  .../asm-generic/bitops/generic-non-atomic.h    | 128 ++++++++++++++++++
>  .../bitops/instrumented-non-atomic.h           |  35 +++--
>  include/asm-generic/bitops/non-atomic.h        | 123 ++---------------
>  include/linux/bitops.h                         |  45 ++++++
>  tools/include/asm-generic/bitops/non-atomic.h  |  34 +++--
>  tools/include/linux/bitops.h                   |  16 +++
>  14 files changed, 363 insertions(+), 220 deletions(-)
>  create mode 100644 include/asm-generic/bitops/generic-non-atomic.h
>
> base-commit: f2906aa863381afb0015a9eb7fefad885d4e5a56
> --
> 2.36.1
>
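The dispatch described in the cover letter boils down to a wrapper macro
around each non-atomic bitop. A minimal sketch of the idea (helper names
here are illustrative, not necessarily the ones the series uses; per the
diffstat the real plumbing lives in include/linux/bitops.h and
include/asm-generic/bitops/generic-non-atomic.h):

/*
 * Sketch only: if the bit number, the NULL-ness of the bitmap pointer,
 * the pointer itself and the pointed-to word are all compile-time
 * constants, use a plain-C helper the compiler can fully evaluate;
 * otherwise fall back to the arch-optimized (often inline-asm) bitop.
 */
#define __set_bit(nr, addr)						\
	((__builtin_constant_p(nr) &&					\
	  __builtin_constant_p(!!(addr)) &&				\
	  (addr) &&							\
	  __builtin_constant_p(*(addr))) ?				\
		generic___set_bit(nr, addr) :				\
		arch___set_bit(nr, addr))

The same pattern applies to the other __*_bit() helpers and test_bit(),
so callers keep the exact same prototypes and only the constant-foldable
cases change code generation.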
Hi Alexander, On Mon, Jun 6, 2022 at 1:50 PM Alexander Lobakin <alexandr.lobakin@intel.com> wrote: > While I was working on converting some structure fields from a fixed > type to a bitmap, I started observing code size increase not only in > places where the code works with the converted structure fields, but > also where the converted vars were on the stack. That said, the > following code: > > DECLARE_BITMAP(foo, BITS_PER_LONG) = { }; // -> unsigned long foo[1]; > unsigned long bar = BIT(BAR_BIT); > unsigned long baz = 0; > > __set_bit(FOO_BIT, foo); > baz |= BIT(BAZ_BIT); > > BUILD_BUG_ON(!__builtin_constant_p(test_bit(FOO_BIT, foo)); > BUILD_BUG_ON(!__builtin_constant_p(bar & BAR_BIT)); > BUILD_BUG_ON(!__builtin_constant_p(baz & BAZ_BIT)); > > triggers the first assertion on x86_64, which means that the > compiler is unable to evaluate it to a compile-time initializer > when the architecture-specific bitop is used even if it's obvious. > I found that this is due to that many architecture-specific > non-atomic bitop implementations use inline asm or other hacks which > are faster or more robust when working with "real" variables (i.e. > fields from the structures etc.), but the compilers have no clue how > to optimize them out when called on compile-time constants. > > So, in order to let the compiler optimize out such cases, expand the > test_bit() and __*_bit() definitions with a compile-time condition > check, so that they will pick the generic C non-atomic bitop > implementations when all of the arguments passed are compile-time > constants, which means that the result will be a compile-time > constant as well and the compiler will produce more efficient and > simple code in 100% cases (no changes when there's at least one > non-compile-time-constant argument). > The condition itself: > > if ( > __builtin_constant_p(nr) && /* <- bit position is constant */ > __builtin_constant_p(!!addr) && /* <- compiler knows bitmap addr is > always either NULL or not */ > addr && /* <- bitmap addr is not NULL */ > __builtin_constant_p(*addr) /* <- compiler knows the value of > the target bitmap */ > ) > /* then pick the generic C variant > else > /* old code path, arch-specific > > I also tried __is_constexpr() as suggested by Andy, but it was > always returning 0 ('not a constant') for the 2,3 and 4th > conditions. > > The savings on x86_64 with LLVM are insane (.text): > > $ scripts/bloat-o-meter -c vmlinux.{base,test} > add/remove: 72/75 grow/shrink: 182/518 up/down: 53925/-137810 (-83885) > > $ scripts/bloat-o-meter -c vmlinux.{base,mod} > add/remove: 7/1 grow/shrink: 1/19 up/down: 1135/-4082 (-2947) > > $ scripts/bloat-o-meter -c vmlinux.{base,all} > add/remove: 79/76 grow/shrink: 184/537 up/down: 55076/-141892 (-86816) Thank you! I gave it a try on m68k, and am a bit disappointed seeing an increase in code size: add/remove: 49/13 grow/shrink: 279/138 up/down: 6434/-3342 (3092) This is atari_defconfig on a tree based on v5.19-rc1, with m68k-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34). Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say "programmer" or something like that. -- Linus Torvalds
From: Geert Uytterhoeven <geert@linux-m68k.org> Date: Tue, 7 Jun 2022 14:45:41 +0200 > Hi Alexander, Hi! > > On Mon, Jun 6, 2022 at 1:50 PM Alexander Lobakin > <alexandr.lobakin@intel.com> wrote: > > While I was working on converting some structure fields from a fixed > > type to a bitmap, I started observing code size increase not only in > > places where the code works with the converted structure fields, but > > also where the converted vars were on the stack. That said, the > > following code: > > > > DECLARE_BITMAP(foo, BITS_PER_LONG) = { }; // -> unsigned long foo[1]; > > unsigned long bar = BIT(BAR_BIT); > > unsigned long baz = 0; > > > > __set_bit(FOO_BIT, foo); > > baz |= BIT(BAZ_BIT); > > > > BUILD_BUG_ON(!__builtin_constant_p(test_bit(FOO_BIT, foo)); > > BUILD_BUG_ON(!__builtin_constant_p(bar & BAR_BIT)); > > BUILD_BUG_ON(!__builtin_constant_p(baz & BAZ_BIT)); > > > > triggers the first assertion on x86_64, which means that the > > compiler is unable to evaluate it to a compile-time initializer > > when the architecture-specific bitop is used even if it's obvious. > > I found that this is due to that many architecture-specific > > non-atomic bitop implementations use inline asm or other hacks which > > are faster or more robust when working with "real" variables (i.e. > > fields from the structures etc.), but the compilers have no clue how > > to optimize them out when called on compile-time constants. > > > > So, in order to let the compiler optimize out such cases, expand the > > test_bit() and __*_bit() definitions with a compile-time condition > > check, so that they will pick the generic C non-atomic bitop > > implementations when all of the arguments passed are compile-time > > constants, which means that the result will be a compile-time > > constant as well and the compiler will produce more efficient and > > simple code in 100% cases (no changes when there's at least one > > non-compile-time-constant argument). > > The condition itself: > > > > if ( > > __builtin_constant_p(nr) && /* <- bit position is constant */ > > __builtin_constant_p(!!addr) && /* <- compiler knows bitmap addr is > > always either NULL or not */ > > addr && /* <- bitmap addr is not NULL */ > > __builtin_constant_p(*addr) /* <- compiler knows the value of > > the target bitmap */ > > ) > > /* then pick the generic C variant > > else > > /* old code path, arch-specific > > > > I also tried __is_constexpr() as suggested by Andy, but it was > > always returning 0 ('not a constant') for the 2,3 and 4th > > conditions. > > > > The savings on x86_64 with LLVM are insane (.text): > > > > $ scripts/bloat-o-meter -c vmlinux.{base,test} > > add/remove: 72/75 grow/shrink: 182/518 up/down: 53925/-137810 (-83885) > > > > $ scripts/bloat-o-meter -c vmlinux.{base,mod} > > add/remove: 7/1 grow/shrink: 1/19 up/down: 1135/-4082 (-2947) > > > > $ scripts/bloat-o-meter -c vmlinux.{base,all} > > add/remove: 79/76 grow/shrink: 184/537 up/down: 55076/-141892 (-86816) > > Thank you! > > I gave it a try on m68k, and am a bit disappointed seeing an increase > in code size: > > add/remove: 49/13 grow/shrink: 279/138 up/down: 6434/-3342 (3092) Ufff, that sucks =\ Could you please try to compile the following code snippet (with the series applied)? 
	unsigned long map;

	bitmap_zero(&map, BITS_PER_LONG);
	__set_bit(1, &map);
	BUILD_BUG_ON(!__builtin_constant_p(map));

If it fails during the vmlinux linkage, it will mean that on your
architecture/setup the compiler is unable to optimize the generic
implementations to compile-time constants and I'll need to debug
this more (probably via some compiler explorer).
You could also check the vmlinux size after applying each patch
to see which one does this if you feel like it :)

> This is atari_defconfig on a tree based on v5.19-rc1, with
> m68k-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0, GNU ld (GNU
> Binutils for Ubuntu) 2.34).
>
> Gr{oetje,eeting}s,
>
>                         Geert
>
> --
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
>
> In personal conversations with technical people, I call myself a hacker. But
> when I'm talking to journalists I just say "programmer" or something like that.
>                                 -- Linus Torvalds

Thanks,
Olek
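A self-contained way to drop that check into the build might look like
the sketch below (the function name and placement are arbitrary; any
file that is always built will do):

/*
 * Sketch, not part of the series: the build should only succeed if the
 * compiler managed to constant-fold @map despite the __set_bit() call.
 */
#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/build_bug.h>

void bitops_constfold_check(void)
{
	unsigned long map;

	bitmap_zero(&map, BITS_PER_LONG);
	__set_bit(1, &map);

	/* Fails (at compile or link time) if @map is not constant here */
	BUILD_BUG_ON(!__builtin_constant_p(map));
}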
Hi Olek, On Tue, Jun 7, 2022 at 5:51 PM Alexander Lobakin <alexandr.lobakin@intel.com> wrote: > From: Geert Uytterhoeven <geert@linux-m68k.org> > > On Mon, Jun 6, 2022 at 1:50 PM Alexander Lobakin > > <alexandr.lobakin@intel.com> wrote: > > > While I was working on converting some structure fields from a fixed > > > type to a bitmap, I started observing code size increase not only in > > > places where the code works with the converted structure fields, but > > > also where the converted vars were on the stack. That said, the > > > following code: > > > > > > DECLARE_BITMAP(foo, BITS_PER_LONG) = { }; // -> unsigned long foo[1]; > > > unsigned long bar = BIT(BAR_BIT); > > > unsigned long baz = 0; > > > > > > __set_bit(FOO_BIT, foo); > > > baz |= BIT(BAZ_BIT); > > > > > > BUILD_BUG_ON(!__builtin_constant_p(test_bit(FOO_BIT, foo)); > > > BUILD_BUG_ON(!__builtin_constant_p(bar & BAR_BIT)); > > > BUILD_BUG_ON(!__builtin_constant_p(baz & BAZ_BIT)); > > > > > > triggers the first assertion on x86_64, which means that the > > > compiler is unable to evaluate it to a compile-time initializer > > > when the architecture-specific bitop is used even if it's obvious. > > > I found that this is due to that many architecture-specific > > > non-atomic bitop implementations use inline asm or other hacks which > > > are faster or more robust when working with "real" variables (i.e. > > > fields from the structures etc.), but the compilers have no clue how > > > to optimize them out when called on compile-time constants. > > > > > > So, in order to let the compiler optimize out such cases, expand the > > > test_bit() and __*_bit() definitions with a compile-time condition > > > check, so that they will pick the generic C non-atomic bitop > > > implementations when all of the arguments passed are compile-time > > > constants, which means that the result will be a compile-time > > > constant as well and the compiler will produce more efficient and > > > simple code in 100% cases (no changes when there's at least one > > > non-compile-time-constant argument). > > > The condition itself: > > > > > > if ( > > > __builtin_constant_p(nr) && /* <- bit position is constant */ > > > __builtin_constant_p(!!addr) && /* <- compiler knows bitmap addr is > > > always either NULL or not */ > > > addr && /* <- bitmap addr is not NULL */ > > > __builtin_constant_p(*addr) /* <- compiler knows the value of > > > the target bitmap */ > > > ) > > > /* then pick the generic C variant > > > else > > > /* old code path, arch-specific > > > > > > I also tried __is_constexpr() as suggested by Andy, but it was > > > always returning 0 ('not a constant') for the 2,3 and 4th > > > conditions. > > > > > > The savings on x86_64 with LLVM are insane (.text): > > > > > > $ scripts/bloat-o-meter -c vmlinux.{base,test} > > > add/remove: 72/75 grow/shrink: 182/518 up/down: 53925/-137810 (-83885) > > > > > > $ scripts/bloat-o-meter -c vmlinux.{base,mod} > > > add/remove: 7/1 grow/shrink: 1/19 up/down: 1135/-4082 (-2947) > > > > > > $ scripts/bloat-o-meter -c vmlinux.{base,all} > > > add/remove: 79/76 grow/shrink: 184/537 up/down: 55076/-141892 (-86816) > > > > Thank you! > > > > I gave it a try on m68k, and am a bit disappointed seeing an increase > > in code size: > > > > add/remove: 49/13 grow/shrink: 279/138 up/down: 6434/-3342 (3092) > > Ufff, that sucks =\ > Could you please try to compile the following code snippet (with the > series applied)? 
> 	unsigned long map;
>
> 	bitmap_zero(&map, BITS_PER_LONG);
> 	__set_bit(1, &map);
> 	BUILD_BUG_ON(!__builtin_constant_p(map));
>
> If it fails during the vmlinux linkage, it will mean that on your
> architecture/setup the compiler is unable to optimize the generic
> implementations to compile-time constants and I'll need to debug
> this more (probably via some compiler explorer).

Builds and links fine.

> You could also check the vmlinux size after applying each patch
> to see which one does this if you feel like it :)

The (incremental) impact of the various patches is shown below:

bitops: unify non-atomic bitops prototypes across architectures
add/remove: 4/11 grow/shrink: 123/160 up/down: 1700/-2786 (-1086)

bitops: let optimize out non-atomic bitops on compile-time constants
add/remove: 50/7 grow/shrink: 280/101 up/down: 6798/-2620 (4178)

I.e. the total impact is -1086 + 4178 = +3092.

Looking at the impact of the last change on a single file, with rather
small functions to make it easier to analyze, the results are:

bloat-o-meter net/core/sock.o{.orig,}
add/remove: 3/1 grow/shrink: 20/3 up/down: 286/-68 (218)
Function                     old     new   delta
sock_flag                      -      38     +38
sock_set_flag                  -      22     +22
sock_reset_flag                -      22     +22
sock_recv_errqueue           264     284     +20
sock_alloc_send_pskb         406     424     +18
__sock_set_timestamps        104     122     +18
sock_setsockopt             2412    2428     +16
sock_pfree                    52      66     +14
sock_wfree                   236     248     +12
sk_wait_data                 222     234     +12
sk_destruct                   70      82     +12
sk_wake_async                 40      50     +10
sk_set_memalloc               74      84     +10
__sock_queue_rcv_skb         254     264     +10
__sk_backlog_rcv              92     102     +10
sock_getsockopt             1734    1742      +8
sock_no_linger                36      42      +6
sk_clone_lock                478     484      +6
sk_clear_memalloc             98     104      +6
__sk_receive_skb             194     200      +6
sock_init_data               344     348      +4
__sock_cmsg_send             196     200      +4
sk_common_release            152     154      +2
sock_set_keepalive            62      60      -2
sock_enable_timestamp         80      72      -8
sock_valbool_flag             34      12     -22
bset_mem_set_bit              36       -     -36
Total: Before=18862, After=19080, chg +1.16%

Several small inline functions are no longer inlined.
And e.g. __sk_backlog_rcv() increases as it now calls sock_flag()
out-of-line, and needs to save more on the stack:

__sk_backlog_rcv:
+	move.l %a3,-(%sp)	|,
	move.l %d2,-(%sp)	|,
-	move.l 8(%sp),%a0	| sk, sk
-| arch/m68k/include/asm/bitops.h:163: 	return (addr[nr >> 5] & (1UL << (nr & 31))) != 0;
-	move.l 76(%a0),%d0	| MEM[(const long unsigned int *)sk_6(D) + 76B], _14
+	move.l 12(%sp),%a3	| sk, sk
| net/core/sock.c:330: 	BUG_ON(!sock_flag(sk, SOCK_MEMALLOC));
-	btst #14,%d0	|, _14
-	jne .L193	|
+	pea 14.w	|
+	move.l %a3,-(%sp)	| sk,
+	jsr sock_flag	|
+	addq.l #8,%sp	|,
+	tst.b %d0	| tmp50
+	jne .L192	|
	pea 330.w	|
	pea .LC0	|
	pea .LC3	|
	jsr _printk	|
	trap #7

Note that the above is with atari_defconfig, which has
CONFIG_CC_OPTIMIZE_FOR_SIZE=y.

Switching to CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y results in a kernel
that is ca. 25% larger, and the net impact of this series is:

add/remove: 24/27 grow/shrink: 227/233 up/down: 7494/-8080 (-586)

i.e. a reduction in size...

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds
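For context on the asm above: the helper that stopped being inlined is,
as far as I can see, this one-liner from include/net/sock.h. @sk is never
a compile-time constant there, so the constant-folding branch added by
the series is always dead in this path:

static inline bool sock_flag(const struct sock *sk, enum sock_flags flag)
{
	/*
	 * &sk->sk_flags is only known at run time, so the
	 * __builtin_constant_p() checks evaluate to false and the call
	 * should fold back to the arch test_bit(); the growth above
	 * appears to come from GCC's -Os inlining heuristics rather
	 * than from extra bitop code.
	 */
	return test_bit(flag, &sk->sk_flags);
}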
From: Geert Uytterhoeven <geert@linux-m68k.org> Date: Wed, 8 Jun 2022 11:55:08 +0200 > Hi Olek, > > On Tue, Jun 7, 2022 at 5:51 PM Alexander Lobakin > <alexandr.lobakin@intel.com> wrote: > > From: Geert Uytterhoeven <geert@linux-m68k.org> > > > On Mon, Jun 6, 2022 at 1:50 PM Alexander Lobakin > > > <alexandr.lobakin@intel.com> wrote: > > > > While I was working on converting some structure fields from a fixed > > > > type to a bitmap, I started observing code size increase not only in > > > > places where the code works with the converted structure fields, but > > > > also where the converted vars were on the stack. That said, the > > > > following code: > > > > > > > > DECLARE_BITMAP(foo, BITS_PER_LONG) = { }; // -> unsigned long foo[1]; > > > > unsigned long bar = BIT(BAR_BIT); > > > > unsigned long baz = 0; > > > > > > > > __set_bit(FOO_BIT, foo); > > > > baz |= BIT(BAZ_BIT); > > > > > > > > BUILD_BUG_ON(!__builtin_constant_p(test_bit(FOO_BIT, foo)); > > > > BUILD_BUG_ON(!__builtin_constant_p(bar & BAR_BIT)); > > > > BUILD_BUG_ON(!__builtin_constant_p(baz & BAZ_BIT)); > > > > > > > > triggers the first assertion on x86_64, which means that the > > > > compiler is unable to evaluate it to a compile-time initializer > > > > when the architecture-specific bitop is used even if it's obvious. > > > > I found that this is due to that many architecture-specific > > > > non-atomic bitop implementations use inline asm or other hacks which > > > > are faster or more robust when working with "real" variables (i.e. > > > > fields from the structures etc.), but the compilers have no clue how > > > > to optimize them out when called on compile-time constants. > > > > > > > > So, in order to let the compiler optimize out such cases, expand the > > > > test_bit() and __*_bit() definitions with a compile-time condition > > > > check, so that they will pick the generic C non-atomic bitop > > > > implementations when all of the arguments passed are compile-time > > > > constants, which means that the result will be a compile-time > > > > constant as well and the compiler will produce more efficient and > > > > simple code in 100% cases (no changes when there's at least one > > > > non-compile-time-constant argument). > > > > The condition itself: > > > > > > > > if ( > > > > __builtin_constant_p(nr) && /* <- bit position is constant */ > > > > __builtin_constant_p(!!addr) && /* <- compiler knows bitmap addr is > > > > always either NULL or not */ > > > > addr && /* <- bitmap addr is not NULL */ > > > > __builtin_constant_p(*addr) /* <- compiler knows the value of > > > > the target bitmap */ > > > > ) > > > > /* then pick the generic C variant > > > > else > > > > /* old code path, arch-specific > > > > > > > > I also tried __is_constexpr() as suggested by Andy, but it was > > > > always returning 0 ('not a constant') for the 2,3 and 4th > > > > conditions. > > > > > > > > The savings on x86_64 with LLVM are insane (.text): > > > > > > > > $ scripts/bloat-o-meter -c vmlinux.{base,test} > > > > add/remove: 72/75 grow/shrink: 182/518 up/down: 53925/-137810 (-83885) > > > > > > > > $ scripts/bloat-o-meter -c vmlinux.{base,mod} > > > > add/remove: 7/1 grow/shrink: 1/19 up/down: 1135/-4082 (-2947) > > > > > > > > $ scripts/bloat-o-meter -c vmlinux.{base,all} > > > > add/remove: 79/76 grow/shrink: 184/537 up/down: 55076/-141892 (-86816) > > > > > > Thank you! 
> > > > > > I gave it a try on m68k, and am a bit disappointed seeing an increase > > > in code size: > > > > > > add/remove: 49/13 grow/shrink: 279/138 up/down: 6434/-3342 (3092) > > > > Ufff, that sucks =\ > > Could you please try to compile the following code snippet (with the > > series applied)? > > > > unsigned long map; > > > > bitmap_zero(&map, BITS_PER_LONG); > > __set_bit(1, &map); > > BUILD_BUG_ON(!__builtin_constant_p(map)); > > > > If it fails during the vmlinux linkage, it will mean that on your > > architecture/setup the compiler is unable to optimize the generic > > implementations to compile-time constants and I'll need to debug > > this more (probably via some compiler explorer). > > Builds and links fine. Ok, so that means the main task is still achieved. > > > You could also check the vmlinux size after applying each patch > > to see which one does this if you feel like it :) > > The (incremental) impact of the various patches is shown below: > > bitops: unify non-atomic bitops prototypes across architectures > add/remove: 4/11 grow/shrink: 123/160 up/down: 1700/-2786 (-1086) > > bitops: let optimize out non-atomic bitops on compile-time constants > add/remove: 50/7 grow/shrink: 280/101 up/down: 6798/-2620 (4178) > > I.e. the total impact is -1086 + 4178 = +3092 > > Looking at the impact of the last change on a single file, with rather > small functions to make it easier to analyze, the results are: > > bloat-o-meter net/core/sock.o{.orig,} > add/remove: 3/1 grow/shrink: 20/3 up/down: 286/-68 (218) > Function old new delta > sock_flag - 38 +38 > sock_set_flag - 22 +22 > sock_reset_flag - 22 +22 > sock_recv_errqueue 264 284 +20 > sock_alloc_send_pskb 406 424 +18 > __sock_set_timestamps 104 122 +18 > sock_setsockopt 2412 2428 +16 > sock_pfree 52 66 +14 > sock_wfree 236 248 +12 > sk_wait_data 222 234 +12 > sk_destruct 70 82 +12 > sk_wake_async 40 50 +10 > sk_set_memalloc 74 84 +10 > __sock_queue_rcv_skb 254 264 +10 > __sk_backlog_rcv 92 102 +10 > sock_getsockopt 1734 1742 +8 > sock_no_linger 36 42 +6 > sk_clone_lock 478 484 +6 > sk_clear_memalloc 98 104 +6 > __sk_receive_skb 194 200 +6 > sock_init_data 344 348 +4 > __sock_cmsg_send 196 200 +4 > sk_common_release 152 154 +2 > sock_set_keepalive 62 60 -2 > sock_enable_timestamp 80 72 -8 > sock_valbool_flag 34 12 -22 > bset_mem_set_bit 36 - -36 > Total: Before=18862, After=19080, chg +1.16% > > Several small inline functions are no longer inlined. > And e.g. __sk_backlog_rcv() increases as it now calls sock_flag() > out-of-line, and needs to save more on the stack: > > __sk_backlog_rcv: > + move.l %a3,-(%sp) |, > move.l %d2,-(%sp) |, > - move.l 8(%sp),%a0 | sk, sk > -| arch/m68k/include/asm/bitops.h:163: return (addr[nr >> 5] & (1UL > << (nr & 31))) != 0; > - move.l 76(%a0),%d0 | MEM[(const long unsigned int > *)sk_6(D) + 76B], _14 > + move.l 12(%sp),%a3 | sk, sk > | net/core/sock.c:330: BUG_ON(!sock_flag(sk, SOCK_MEMALLOC)); > - btst #14,%d0 |, _14 > - jne .L193 | > + pea 14.w | > + move.l %a3,-(%sp) | sk, > + jsr sock_flag | > + addq.l #8,%sp |, > + tst.b %d0 | tmp50 > + jne .L192 | > pea 330.w | > pea .LC0 | > pea .LC3 | > jsr _printk | > trap #7 > > Note that the above is with atari_defconfig, which has > CONFIG_CC_OPTIMIZE_FOR_SIZE=y. > > Switching to CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y results in a kernel > that is ca. 25% larger, and the net impact of this series is: > > add/remove: 24/27 grow/shrink: 227/233 up/down: 7494/-8080 (-586) > > i.e. a reduction in size... Much thanks for such an analysis! 
I see that previously it was uninlining bset_mem_set_bit(), which is the
m68k implementation of __set_bit(), and now it uninlines the "outer"
sock_*_flag() helpers instead. I'd call that sort of expected, but the
intention was to not change the existing code when we're unable to
optimize it out... and those sock helpers are usually on that side,
since @sk is never anything constant.

It's unexpected that the compiler is unable to resolve the expression
back to the original bitop call on `-Os`. Dunno if it's okay <O>
__builtin_constant_p() still gives the desired results, so I guess it
is, but at the same time that function layout rebalance can potentially
hurt performance. Or improve it ._.

BTW, after updating LLVM 13 -> 14 I'm now getting -3 Kb (was -86 Kb).
GCC 12 still gives -30 Kb.

> Gr{oetje,eeting}s,
>
>                         Geert
>
> --
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
>
> In personal conversations with technical people, I call myself a hacker. But
> when I'm talking to journalists I just say "programmer" or something like that.
>                                 -- Linus Torvalds

Thanks,
Olek