
[V2] Don't reduce estimated unrolled size for innermost loop at cunrolli.

Message ID 20240522050734.1129622-1-hongtao.liu@intel.com
State New
Series [V2] Don't reduce estimated unrolled size for innermost loop at cunrolli.

Commit Message

liuhongt May 22, 2024, 5:07 a.m. UTC
>> Hard to find a default value satisfying all testcases.
>> some require loop unroll with 7 insns increment, some don't want loop
>> unroll w/ 5 insn increment.
>> The original 2/3 reduction happened to meet all those testcases(or the
>> testcases are constructed based on the old 2/3).
>> Can we define the parameter as the size of the loop, below the size we
>> still do the reduction, so the small loop can be unrolled?

>Yeah, that's also a sensible possibility.  Does it work to have a parameter
>for the unrolled body size?  Thus, amend the existing
>--param max-completely-peeled-insns with a --param
>max-completely-peeled-insns-nogrowth?

Update V2:
It's still hard to find a default value for the loop body size, so I
moved the 2 / 3 reduction from estimated_unrolled_size to
try_unroll_loop_completely.
For the body-size shrink check, the 2 / 3 reduction is still applied,
so small loops can still be unrolled.
For the comparison of body size against param_max_completely_peeled_insns,
the 2 / 3 reduction is applied only when loop->inner || !cunrolli.
With this, the patch avoids GCC testsuite regressions and also prevents
big inner loops from being completely unrolled at cunrolli.

------------------

For the innermost loop, complete unrolling will most likely not reduce
the body size to 2/3. The current unconditional 2/3 reduction makes
some of the larger loops completely unrolled during cunrolli, which
then prevents them from being vectorized. It also increases register
pressure. The patch moves the 2/3 reduction out of
estimated_unrolled_size so it can be skipped for innermost loops at
cunrolli.

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
Ok for trunk?

gcc/ChangeLog:

	PR tree-optimization/112325
	* tree-ssa-loop-ivcanon.cc (estimated_unrolled_size): Move the
	2 / 3 loop body size reduction to ...
	(try_unroll_loop_completely): ... here; add it for the check of
	body size shrink, and for the comparison against
	param_max_completely_peeled_insns when
	(!cunrolli || loop->inner).
	(canonicalize_loop_induction_variables): Add new parameter
	cunrolli and pass down.
	(tree_unroll_loops_completely_1): Ditto.
	(tree_unroll_loops_completely): Ditto.
	(canonicalize_induction_variables): Handle new parameter.
	(pass_complete_unrolli::execute): Ditto.
	(pass_complete_unroll::execute): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.dg/tree-ssa/pr112325.c: New test.
	* gcc.dg/vect/pr69783.c: Add extra option --param
	max-completely-peeled-insns=300.
---
 gcc/testsuite/gcc.dg/tree-ssa/pr112325.c | 57 ++++++++++++++++++++++++
 gcc/testsuite/gcc.dg/vect/pr69783.c      |  2 +-
 gcc/tree-ssa-loop-ivcanon.cc             | 45 ++++++++++---------
 3 files changed, 83 insertions(+), 21 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/pr112325.c

Comments

Hongtao Liu May 23, 2024, 1:55 a.m. UTC | #1
On Wed, May 22, 2024 at 1:07 PM liuhongt <hongtao.liu@intel.com> wrote:
>
> >> Hard to find a default value satisfying all testcases.
> >> some require loop unroll with 7 insns increment, some don't want loop
> >> unroll w/ 5 insn increment.
> >> The original 2/3 reduction happened to meet all those testcases(or the
> >> testcases are constructed based on the old 2/3).
> >> Can we define the parameter as the size of the loop, below the size we
> >> still do the reduction, so the small loop can be unrolled?
>
> >Yeah, that's also a sensible possibility.  Does it work to have a parameter
> >for the unrolled body size?  Thus, amend the existing
> >--param max-completely-peeled-insns with a --param
> >max-completely-peeled-insns-nogrowth?
>
> Update V2:
> It's still hard to find a default value for loop boday size. So I move the
> 2 / 3 reduction from estimated_unrolled_size to try_unroll_loop_completely.
> For the check of body size shrink, 2 / 3 reduction is added, so small loops
> can still be unrolled.
> For the check of comparison between body size and param_max_completely_peeled_insns,
> 2 / 3 is conditionally added for loop->inner || !cunrolli.
> Then the patch avoid gcc testsuite regression, and also prevent big inner loop
> completely unrolled at cunrolli.
The patch regressed arm-*-eabi with 3 FAILs.

regressions.sum:

                                === gcc tests ===

Running gcc:gcc.dg/tree-ssa/tree-ssa.exp ...

FAIL: gcc.dg/tree-ssa/pr83403-1.c scan-tree-dump-times lim2 "Executing store motion of" 10

FAIL: gcc.dg/tree-ssa/pr83403-2.c scan-tree-dump-times lim2 "Executing store motion of" 10

                                === gfortran tests ===

Running gfortran:gfortran.dg/dg.exp ...
FAIL: gfortran.dg/reassoc_4.f -O   scan-tree-dump-times reassoc1 "[0-9] \\* " 22

For 32-bit arm, estimate_num_insns_seq returns a larger size (2 rather
than 1) for a load/store of double, as the dumps show.

The loop in pr83403-1.c:
Estimating sizes for loop 4
 BB: 6, after_exit: 0
  size:   2 if (m_23 != 10)
   Exit condition will be eliminated in peeled copies.
   Exit condition will be eliminated in last copy.
   Constant conditional.
 BB: 5, after_exit: 1
  size:   1 _5 = n_24 * 10;
  size:   1 _6 = _5 + m_23;
  size:   1 _7 = _6 * 8;
  size:   1 _8 = C_35 + _7;
  size:   2 _9 = *_8;
  size:   1 _10 = k_25 * 20;
  size:   1 _11 = _10 + m_23;
  size:   1 _12 = _11 * 8;
  size:   1 _13 = A_31 + _12;
  size:   2 _14 = *_13;
  size:   1 _15 = n_24 * 20;
  size:   1 _16 = _15 + k_25;
  size:   1 _17 = _16 * 8;
  size:   1 _18 = B_33 + _17;
  size:   2 _19 = *_18;
  size:   1 _20 = _14 * _19;
  size:   1 _21 = _9 + _20;
  size:   2 *_8 = _21;
  size:   1 m_40 = m_23 + 1;
   Induction variable computation will be folded away.
size: 25-3, last_iteration: 2-2
  Loop size: 25
  Estimated size after unrolling: 220

For aarch64 and x86, it's ok:

Estimating sizes for loop 4
 BB: 6, after_exit: 0
  size:   2 if (m_27 != 10)
   Exit condition will be eliminated in peeled copies.
   Exit condition will be eliminated in last copy.
   Constant conditional.
 BB: 5, after_exit: 1
  size:   1 _6 = n_28 * 10;
  size:   1 _7 = _6 + m_27;
  size:   0 _8 = (long unsigned int) _7;
  size:   1 _9 = _8 * 8;
  size:   1 _10 = C_39 + _9;
  size:   1 _11 = *_10;
  size:   1 _12 = k_29 * 20;
  size:   1 _13 = _12 + m_27;
  size:   0 _14 = (long unsigned int) _13;
  size:   1 _15 = _14 * 8;
  size:   1 _16 = A_35 + _15;
  size:   1 _17 = *_16;
  size:   1 _18 = n_28 * 20;
  size:   1 _19 = _18 + k_29;
  size:   0 _20 = (long unsigned int) _19;
  size:   1 _21 = _20 * 8;
  size:   1 _22 = B_37 + _21;
  size:   1 _23 = *_22;
  size:   1 _24 = _17 * _23;
  size:   1 _25 = _11 + _24;
  size:   1 *_10 = _25;
  size:   1 m_44 = m_27 + 1;
   Induction variable computation will be folded away.
size: 21-3, last_iteration: 2-2
  Loop size: 21
  Estimated size after unrolling: 180

> [rest of the quoted patch snipped; no further comments]
Richard Biener May 23, 2024, 11:59 a.m. UTC | #2
On Wed, May 22, 2024 at 7:07 AM liuhongt <hongtao.liu@intel.com> wrote:
>
> [commit message and ChangeLog snipped]
> diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c b/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
> new file mode 100644
> index 00000000000..14208b3e7f8
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
> @@ -0,0 +1,57 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-tree-cunrolli-details" } */
> +
> +typedef unsigned short ggml_fp16_t;
> +static float table_f32_f16[1 << 16];
> +
> +inline static float ggml_lookup_fp16_to_fp32(ggml_fp16_t f) {
> +    unsigned short s;
> +    __builtin_memcpy(&s, &f, sizeof(unsigned short));
> +    return table_f32_f16[s];
> +}
> +
> +typedef struct {
> +    ggml_fp16_t d;
> +    ggml_fp16_t m;
> +    unsigned char qh[4];
> +    unsigned char qs[32 / 2];
> +} block_q5_1;
> +
> +typedef struct {
> +    float d;
> +    float s;
> +    char qs[32];
> +} block_q8_1;
> +
> +void ggml_vec_dot_q5_1_q8_1(const int n, float * restrict s, const void * restrict vx, const void * restrict vy) {
> +    const int qk = 32;
> +    const int nb = n / qk;
> +
> +    const block_q5_1 * restrict x = vx;
> +    const block_q8_1 * restrict y = vy;
> +
> +    float sumf = 0.0;
> +
> +    for (int i = 0; i < nb; i++) {
> +        unsigned qh;
> +        __builtin_memcpy(&qh, x[i].qh, sizeof(qh));
> +
> +        int sumi = 0;
> +
> +        for (int j = 0; j < qk/2; ++j) {
> +            const unsigned char xh_0 = ((qh >> (j + 0)) << 4) & 0x10;
> +            const unsigned char xh_1 = ((qh >> (j + 12)) ) & 0x10;
> +
> +            const int x0 = (x[i].qs[j] & 0xF) | xh_0;
> +            const int x1 = (x[i].qs[j] >> 4) | xh_1;
> +
> +            sumi += (x0 * y[i].qs[j]) + (x1 * y[i].qs[j + qk/2]);
> +        }
> +
> +        sumf += (ggml_lookup_fp16_to_fp32(x[i].d)*y[i].d)*sumi + ggml_lookup_fp16_to_fp32(x[i].m)*y[i].s;
> +    }
> +
> +    *s = sumf;
> +}
> +
> +/* { dg-final { scan-tree-dump {(?n)Not unrolling loop [1-9] \(--param max-completely-peel-times limit reached} "cunrolli"} } */

Since this was about vectorization, can you instead add a testcase to
gcc.dg/vect/ and check that vectorization happens?

> diff --git a/gcc/testsuite/gcc.dg/vect/pr69783.c b/gcc/testsuite/gcc.dg/vect/pr69783.c
> index 5df95d0ce4e..a1f75514d72 100644
> --- a/gcc/testsuite/gcc.dg/vect/pr69783.c
> +++ b/gcc/testsuite/gcc.dg/vect/pr69783.c
> @@ -1,6 +1,6 @@
>  /* { dg-do compile } */
>  /* { dg-require-effective-target vect_float } */
> -/* { dg-additional-options "-Ofast -funroll-loops" } */
> +/* { dg-additional-options "-Ofast -funroll-loops --param max-completely-peeled-insns=300" } */

It _looks_ like this was maybe also vectorizer-related?  Can you
double-check the PR?  We don't seem to check whether we vectorize;
does this change with the default --param max-completely-peeled-insns?

I'd rather have a #pragma GCC unroll before the loop we need unrolled
than an adjusted --param max-completely-peeled-insns.  But if we just
trade one vectorized loop for another I'm not so sure about the patch.

>  #define NXX 516
>  #define NYY 516
> diff --git a/gcc/tree-ssa-loop-ivcanon.cc b/gcc/tree-ssa-loop-ivcanon.cc
> index bf017137260..cc53eee1301 100644
> --- a/gcc/tree-ssa-loop-ivcanon.cc
> +++ b/gcc/tree-ssa-loop-ivcanon.cc
> @@ -437,11 +437,7 @@ tree_estimate_loop_size (class loop *loop, edge exit, edge edge_to_cancel,
>     It is (NUNROLL + 1) * size of loop body with taking into account
>     the fact that in last copy everything after exit conditional
>     is dead and that some instructions will be eliminated after
> -   peeling.
> -
> -   Loop body is likely going to simplify further, this is difficult
> -   to guess, we just decrease the result by 1/3.  */
> -
> +   peeling.  */
>  static unsigned HOST_WIDE_INT
>  estimated_unrolled_size (struct loop_size *size,
>                          unsigned HOST_WIDE_INT nunroll)
> @@ -453,7 +449,6 @@ estimated_unrolled_size (struct loop_size *size,
>      unr_insns = 0;
>    unr_insns += size->last_iteration - size->last_iteration_eliminated_by_peeling;
>
> -  unr_insns = unr_insns * 2 / 3;
>    if (unr_insns <= 0)
>      unr_insns = 1;

I believe the if (unr_insn <= 0) check can go as well.

> @@ -734,7 +729,8 @@ try_unroll_loop_completely (class loop *loop,
>                             edge exit, tree niter, bool may_be_zero,
>                             enum unroll_level ul,
>                             HOST_WIDE_INT maxiter,
> -                           dump_user_location_t locus, bool allow_peel)
> +                           dump_user_location_t locus, bool allow_peel,
> +                           bool cunrolli)
>  {
>    unsigned HOST_WIDE_INT n_unroll = 0;
>    bool n_unroll_found = false;
> @@ -847,8 +843,9 @@ try_unroll_loop_completely (class loop *loop,
>
>           /* If the code is going to shrink, we don't need to be extra
>              cautious on guessing if the unrolling is going to be
> -            profitable.  */
> -         if (unr_insns
> +            profitable.
> +            Move from estimated_unrolled_size to unroll small loops.  */
> +         if (unr_insns * 2 / 3
>               /* If there is IV variable that will become constant, we
>                  save one instruction in the loop prologue we do not
>                  account otherwise.  */
> @@ -919,7 +916,13 @@ try_unroll_loop_completely (class loop *loop,
>                          loop->num);
>               return false;
>             }
> -         else if (unr_insns
> +         /* Move 2 / 3 reduction from estimated_unrolled_size, but don't reduce
> +            unrolled size for innermost loop when cunrolli.
> +            1) It could increase register pressure.
> +            2) Big loop after completely unroll may not be vectorized
> +            by BB vectorizer.  */
> +         else if ((cunrolli && !loop->inner
> +                   ? unr_insns : unr_insns * 2 / 3)
>                    > (unsigned) param_max_completely_peeled_insns)
>             {
>               if (dump_file && (dump_flags & TDF_DETAILS))
> @@ -1227,7 +1230,7 @@ try_peel_loop (class loop *loop,
>  static bool
>  canonicalize_loop_induction_variables (class loop *loop,
>                                        bool create_iv, enum unroll_level ul,
> -                                      bool try_eval, bool allow_peel)
> +                                      bool try_eval, bool allow_peel, bool cunrolli)
>  {
>    edge exit = NULL;
>    tree niter;
> @@ -1314,7 +1317,7 @@ canonicalize_loop_induction_variables (class loop *loop,
>
>    dump_user_location_t locus = find_loop_location (loop);
>    if (try_unroll_loop_completely (loop, exit, niter, may_be_zero, ul,
> -                                 maxiter, locus, allow_peel))
> +                                 maxiter, locus, allow_peel, cunrolli))
>      return true;
>
>    if (create_iv
> @@ -1358,7 +1361,7 @@ canonicalize_induction_variables (void)
>      {
>        changed |= canonicalize_loop_induction_variables (loop,
>                                                         true, UL_SINGLE_ITER,
> -                                                       true, false);
> +                                                       true, false, false);
>      }
>    gcc_assert (!need_ssa_update_p (cfun));
>
> @@ -1392,7 +1395,7 @@ canonicalize_induction_variables (void)
>
>  static bool
>  tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
> -                               bitmap father_bbs, class loop *loop)
> +                               bitmap father_bbs, class loop *loop, bool cunrolli)
>  {
>    class loop *loop_father;
>    bool changed = false;
> @@ -1410,7 +1413,7 @@ tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
>         if (!child_father_bbs)
>           child_father_bbs = BITMAP_ALLOC (NULL);
>         if (tree_unroll_loops_completely_1 (may_increase_size, unroll_outer,
> -                                           child_father_bbs, inner))
> +                                           child_father_bbs, inner, cunrolli))
>           {
>             bitmap_ior_into (father_bbs, child_father_bbs);
>             bitmap_clear (child_father_bbs);
> @@ -1456,7 +1459,7 @@ tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
>      ul = UL_NO_GROWTH;
>
>    if (canonicalize_loop_induction_variables
> -        (loop, false, ul, !flag_tree_loop_ivcanon, unroll_outer))
> +      (loop, false, ul, !flag_tree_loop_ivcanon, unroll_outer, cunrolli))
>      {
>        /* If we'll continue unrolling, we need to propagate constants
>          within the new basic blocks to fold away induction variable
> @@ -1485,7 +1488,8 @@ tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
>     size of the code does not increase.  */
>
>  static unsigned int
> -tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer)
> +tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer,
> +                             bool cunrolli)
>  {
>    bitmap father_bbs = BITMAP_ALLOC (NULL);
>    bool changed;
> @@ -1507,7 +1511,8 @@ tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer)
>
>        changed = tree_unroll_loops_completely_1 (may_increase_size,
>                                                 unroll_outer, father_bbs,
> -                                               current_loops->tree_root);
> +                                               current_loops->tree_root,
> +                                               cunrolli);

As said, you want to do

          cunrolli = false;

after the above, since we are iterating: for a subsequent unrolling of
an outer loop of an already-unrolled inner loop we _do_ want to apply
the 2/3 reduction, since there are likely inter-loop redundancies
exposed (as happens in SPEC calculix, for example).
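Concretely, the suggestion amounts to something like this untested sketch against tree_unroll_loops_completely's iteration loop:

```diff
       changed = tree_unroll_loops_completely_1 (may_increase_size,
                                                 unroll_outer, father_bbs,
                                                 current_loops->tree_root,
                                                 cunrolli);
+      /* Only the first round is cunrolli proper; later rounds unroll
+         outer loops of already-unrolled inner loops, where inter-loop
+         redundancies make the 2/3 reduction a reasonable guess.  */
+      cunrolli = false;
```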

Not sure if that changes any of the testsuite outcome - it possibly avoids the
gcc.dg/vect/pr69783.c FAIL?

Not sure about the arm fallout.

Richard.

>        if (changed)
>         {
>           unsigned i;
> @@ -1671,7 +1676,7 @@ pass_complete_unroll::execute (function *fun)
>    if (flag_peel_loops)
>      peeled_loops = BITMAP_ALLOC (NULL);
>    unsigned int val = tree_unroll_loops_completely (flag_cunroll_grow_size,
> -                                                  true);
> +                                                  true, false);
>    if (peeled_loops)
>      {
>        BITMAP_FREE (peeled_loops);
> @@ -1727,7 +1732,7 @@ pass_complete_unrolli::execute (function *fun)
>    if (number_of_loops (fun) > 1)
>      {
>        scev_initialize ();
> -      ret = tree_unroll_loops_completely (optimize >= 3, false);
> +      ret = tree_unroll_loops_completely (optimize >= 3, false, true);
>        scev_finalize ();
>      }
>    loop_optimizer_finalize ();
> --
> 2.31.1
>
Patch

diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c b/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
new file mode 100644
index 00000000000..14208b3e7f8
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr112325.c
@@ -0,0 +1,57 @@ 
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-cunrolli-details" } */
+
+typedef unsigned short ggml_fp16_t;
+static float table_f32_f16[1 << 16];
+
+inline static float ggml_lookup_fp16_to_fp32(ggml_fp16_t f) {
+    unsigned short s;
+    __builtin_memcpy(&s, &f, sizeof(unsigned short));
+    return table_f32_f16[s];
+}
+
+typedef struct {
+    ggml_fp16_t d;
+    ggml_fp16_t m;
+    unsigned char qh[4];
+    unsigned char qs[32 / 2];
+} block_q5_1;
+
+typedef struct {
+    float d;
+    float s;
+    char qs[32];
+} block_q8_1;
+
+void ggml_vec_dot_q5_1_q8_1(const int n, float * restrict s, const void * restrict vx, const void * restrict vy) {
+    const int qk = 32;
+    const int nb = n / qk;
+
+    const block_q5_1 * restrict x = vx;
+    const block_q8_1 * restrict y = vy;
+
+    float sumf = 0.0;
+
+    for (int i = 0; i < nb; i++) {
+        unsigned qh;
+        __builtin_memcpy(&qh, x[i].qh, sizeof(qh));
+
+        int sumi = 0;
+
+        for (int j = 0; j < qk/2; ++j) {
+            const unsigned char xh_0 = ((qh >> (j + 0)) << 4) & 0x10;
+            const unsigned char xh_1 = ((qh >> (j + 12)) ) & 0x10;
+
+            const int x0 = (x[i].qs[j] & 0xF) | xh_0;
+            const int x1 = (x[i].qs[j] >> 4) | xh_1;
+
+            sumi += (x0 * y[i].qs[j]) + (x1 * y[i].qs[j + qk/2]);
+        }
+
+        sumf += (ggml_lookup_fp16_to_fp32(x[i].d)*y[i].d)*sumi + ggml_lookup_fp16_to_fp32(x[i].m)*y[i].s;
+    }
+
+    *s = sumf;
+}
+
+/* { dg-final { scan-tree-dump {(?n)Not unrolling loop [1-9] \(--param max-completely-peel-times limit reached} "cunrolli"} } */
diff --git a/gcc/testsuite/gcc.dg/vect/pr69783.c b/gcc/testsuite/gcc.dg/vect/pr69783.c
index 5df95d0ce4e..a1f75514d72 100644
--- a/gcc/testsuite/gcc.dg/vect/pr69783.c
+++ b/gcc/testsuite/gcc.dg/vect/pr69783.c
@@ -1,6 +1,6 @@ 
 /* { dg-do compile } */
 /* { dg-require-effective-target vect_float } */
-/* { dg-additional-options "-Ofast -funroll-loops" } */
+/* { dg-additional-options "-Ofast -funroll-loops --param max-completely-peeled-insns=300" } */
 
 #define NXX 516
 #define NYY 516
diff --git a/gcc/tree-ssa-loop-ivcanon.cc b/gcc/tree-ssa-loop-ivcanon.cc
index bf017137260..cc53eee1301 100644
--- a/gcc/tree-ssa-loop-ivcanon.cc
+++ b/gcc/tree-ssa-loop-ivcanon.cc
@@ -437,11 +437,7 @@  tree_estimate_loop_size (class loop *loop, edge exit, edge edge_to_cancel,
    It is (NUNROLL + 1) * size of loop body with taking into account
    the fact that in last copy everything after exit conditional
    is dead and that some instructions will be eliminated after
-   peeling.
-
-   Loop body is likely going to simplify further, this is difficult
-   to guess, we just decrease the result by 1/3.  */
-
+   peeling.  */
 static unsigned HOST_WIDE_INT
 estimated_unrolled_size (struct loop_size *size,
 			 unsigned HOST_WIDE_INT nunroll)
@@ -453,7 +449,6 @@  estimated_unrolled_size (struct loop_size *size,
     unr_insns = 0;
   unr_insns += size->last_iteration - size->last_iteration_eliminated_by_peeling;
 
-  unr_insns = unr_insns * 2 / 3;
   if (unr_insns <= 0)
     unr_insns = 1;
 
@@ -734,7 +729,8 @@  try_unroll_loop_completely (class loop *loop,
 			    edge exit, tree niter, bool may_be_zero,
 			    enum unroll_level ul,
 			    HOST_WIDE_INT maxiter,
-			    dump_user_location_t locus, bool allow_peel)
+			    dump_user_location_t locus, bool allow_peel,
+			    bool cunrolli)
 {
   unsigned HOST_WIDE_INT n_unroll = 0;
   bool n_unroll_found = false;
@@ -847,8 +843,9 @@  try_unroll_loop_completely (class loop *loop,
 
 	  /* If the code is going to shrink, we don't need to be extra
 	     cautious on guessing if the unrolling is going to be
-	     profitable.  */
-	  if (unr_insns
+	     profitable.  The 2 / 3 reduction is moved here from
+	     estimated_unrolled_size so small loops can still be unrolled.  */
+	  if (unr_insns * 2 / 3
 	      /* If there is IV variable that will become constant, we
 		 save one instruction in the loop prologue we do not
 		 account otherwise.  */
@@ -919,7 +916,13 @@  try_unroll_loop_completely (class loop *loop,
 			 loop->num);
 	      return false;
 	    }
-	  else if (unr_insns
+	  /* The 2 / 3 reduction is moved here from estimated_unrolled_size,
+	     but don't reduce the unrolled size for an innermost loop
+	     at cunrolli:
+	     1) It could increase register pressure.
+	     2) A big completely-unrolled loop may not be BB-vectorizable.  */
+	  else if ((cunrolli && !loop->inner
+		    ? unr_insns : unr_insns * 2 / 3)
 		   > (unsigned) param_max_completely_peeled_insns)
 	    {
 	      if (dump_file && (dump_flags & TDF_DETAILS))
@@ -1227,7 +1230,7 @@  try_peel_loop (class loop *loop,
 static bool
 canonicalize_loop_induction_variables (class loop *loop,
 				       bool create_iv, enum unroll_level ul,
-				       bool try_eval, bool allow_peel)
+				       bool try_eval, bool allow_peel, bool cunrolli)
 {
   edge exit = NULL;
   tree niter;
@@ -1314,7 +1317,7 @@  canonicalize_loop_induction_variables (class loop *loop,
 
   dump_user_location_t locus = find_loop_location (loop);
   if (try_unroll_loop_completely (loop, exit, niter, may_be_zero, ul,
-				  maxiter, locus, allow_peel))
+				  maxiter, locus, allow_peel, cunrolli))
     return true;
 
   if (create_iv
@@ -1358,7 +1361,7 @@  canonicalize_induction_variables (void)
     {
       changed |= canonicalize_loop_induction_variables (loop,
 							true, UL_SINGLE_ITER,
-							true, false);
+							true, false, false);
     }
   gcc_assert (!need_ssa_update_p (cfun));
 
@@ -1392,7 +1395,7 @@  canonicalize_induction_variables (void)
 
 static bool
 tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
-				bitmap father_bbs, class loop *loop)
+				bitmap father_bbs, class loop *loop, bool cunrolli)
 {
   class loop *loop_father;
   bool changed = false;
@@ -1410,7 +1413,7 @@  tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
 	if (!child_father_bbs)
 	  child_father_bbs = BITMAP_ALLOC (NULL);
 	if (tree_unroll_loops_completely_1 (may_increase_size, unroll_outer,
-					    child_father_bbs, inner))
+					    child_father_bbs, inner, cunrolli))
 	  {
 	    bitmap_ior_into (father_bbs, child_father_bbs);
 	    bitmap_clear (child_father_bbs);
@@ -1456,7 +1459,7 @@  tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
     ul = UL_NO_GROWTH;
 
   if (canonicalize_loop_induction_variables
-        (loop, false, ul, !flag_tree_loop_ivcanon, unroll_outer))
+      (loop, false, ul, !flag_tree_loop_ivcanon, unroll_outer, cunrolli))
     {
       /* If we'll continue unrolling, we need to propagate constants
 	 within the new basic blocks to fold away induction variable
@@ -1485,7 +1488,8 @@  tree_unroll_loops_completely_1 (bool may_increase_size, bool unroll_outer,
    size of the code does not increase.  */
 
 static unsigned int
-tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer)
+tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer,
+			      bool cunrolli)
 {
   bitmap father_bbs = BITMAP_ALLOC (NULL);
   bool changed;
@@ -1507,7 +1511,8 @@  tree_unroll_loops_completely (bool may_increase_size, bool unroll_outer)
 
       changed = tree_unroll_loops_completely_1 (may_increase_size,
 						unroll_outer, father_bbs,
-						current_loops->tree_root);
+						current_loops->tree_root,
+						cunrolli);
       if (changed)
 	{
 	  unsigned i;
@@ -1671,7 +1676,7 @@  pass_complete_unroll::execute (function *fun)
   if (flag_peel_loops)
     peeled_loops = BITMAP_ALLOC (NULL);
   unsigned int val = tree_unroll_loops_completely (flag_cunroll_grow_size, 
-						   true);
+						   true, false);
   if (peeled_loops)
     {
       BITMAP_FREE (peeled_loops);
@@ -1727,7 +1732,7 @@  pass_complete_unrolli::execute (function *fun)
   if (number_of_loops (fun) > 1)
     {
       scev_initialize ();
-      ret = tree_unroll_loops_completely (optimize >= 3, false);
+      ret = tree_unroll_loops_completely (optimize >= 3, false, true);
       scev_finalize ();
     }
   loop_optimizer_finalize ();