| Message ID | 53802562-2e6b-ea67-0f97-a32126702593@arm.com |
| --- | --- |
| State | New |
| Series | [Aarch64] Fix alignment of neon loads & stores in gimple |
"Andre Vieira (lists)" <andre.simoesdiasvieira@arm.com> writes: > Hi, > > This fixes the alignment on the memory access type for neon loads & > stores in the gimple lowering. Bootstrap ubsan on aarch64 builds again > with this change. > > > 2021-10-25 Andre Vieira <andre.simoesdiasvieira@arm.com> > > gcc/ChangeLog: > > * config/aarch64/aarch64-builtins.c > (aarch64_general_gimple_fold_builtin): Fix memory access > type alignment. > > > Is this OK for trunk? > > Kind regards, > Andre > > diff --git a/gcc/config/aarch64/aarch64-builtins.c b/gcc/config/aarch64/aarch64-builtins.c > index a815e4cfbccab692ca688ba87c71b06c304abbfb..f5436baf5f8a65c340e05faa491d86a7847c37d3 100644 > --- a/gcc/config/aarch64/aarch64-builtins.c > +++ b/gcc/config/aarch64/aarch64-builtins.c > @@ -2490,12 +2490,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt, > gimple_seq stmts = NULL; > tree base = gimple_convert (&stmts, elt_ptr_type, > args[0]); > + /* Use element type alignment. */ > + tree access_type > + = build_aligned_type (simd_type.itype, > + TYPE_ALIGN (TREE_TYPE (simd_type.itype))); Guess this is slightly simpler as TYPE_ALIGN (simd_type.eltype) but either's fine. OK with or without that change. Thanks, Richard > if (stmts) > gsi_insert_seq_before (gsi, stmts, GSI_SAME_STMT); > new_stmt > = gimple_build_assign (gimple_get_lhs (stmt), > fold_build2 (MEM_REF, > - simd_type.itype, > + access_type, > base, zero)); > } > break; > @@ -2512,13 +2516,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt, > gimple_seq stmts = NULL; > tree base = gimple_convert (&stmts, elt_ptr_type, > args[0]); > + /* Use element type alignment. */ > + tree access_type > + = build_aligned_type (simd_type.itype, > + TYPE_ALIGN (TREE_TYPE (simd_type.itype))); > if (stmts) > gsi_insert_seq_before (gsi, stmts, GSI_SAME_STMT); > new_stmt > - = gimple_build_assign (fold_build2 (MEM_REF, > - simd_type.itype, > - base, > - zero), args[1]); > + = gimple_build_assign (fold_build2 (MEM_REF, access_type, > + base, zero), > + args[1]); > } > break; >
diff --git a/gcc/config/aarch64/aarch64-builtins.c b/gcc/config/aarch64/aarch64-builtins.c
index a815e4cfbccab692ca688ba87c71b06c304abbfb..f5436baf5f8a65c340e05faa491d86a7847c37d3 100644
--- a/gcc/config/aarch64/aarch64-builtins.c
+++ b/gcc/config/aarch64/aarch64-builtins.c
@@ -2490,12 +2490,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
           gimple_seq stmts = NULL;
           tree base = gimple_convert (&stmts, elt_ptr_type,
                                       args[0]);
+          /* Use element type alignment.  */
+          tree access_type
+            = build_aligned_type (simd_type.itype,
+                                  TYPE_ALIGN (TREE_TYPE (simd_type.itype)));
           if (stmts)
             gsi_insert_seq_before (gsi, stmts, GSI_SAME_STMT);
           new_stmt
             = gimple_build_assign (gimple_get_lhs (stmt),
                                    fold_build2 (MEM_REF,
-                                                simd_type.itype,
+                                                access_type,
                                                 base, zero));
         }
       break;
@@ -2512,13 +2516,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
           gimple_seq stmts = NULL;
           tree base = gimple_convert (&stmts, elt_ptr_type,
                                       args[0]);
+          /* Use element type alignment.  */
+          tree access_type
+            = build_aligned_type (simd_type.itype,
+                                  TYPE_ALIGN (TREE_TYPE (simd_type.itype)));
           if (stmts)
             gsi_insert_seq_before (gsi, stmts, GSI_SAME_STMT);
           new_stmt
-            = gimple_build_assign (fold_build2 (MEM_REF,
-                                                simd_type.itype,
-                                                base,
-                                                zero), args[1]);
+            = gimple_build_assign (fold_build2 (MEM_REF, access_type,
+                                                base, zero),
+                                   args[1]);
         }
       break;
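For reference, the simplification Richard suggests in the review would only change how the element alignment is looked up; a sketch of how the new lines would read with that variant (same surrounding context as the hunks above, not a committed change):

          /* Use element type alignment.  */
          tree access_type
            = build_aligned_type (simd_type.itype,
                                  TYPE_ALIGN (simd_type.eltype));

Either spelling yields the same alignment, since simd_type.eltype is the element type of the vector type simd_type.itype.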