From patchwork Tue Jun 28 11:59:51 2011
X-Patchwork-Submitter: Martin Jambor
X-Patchwork-Id: 102374
From: Martin Jambor
To: Richard Guenther
Cc: GCC Patches
Subject: Re: [PATCH, PR 49094] Refrain from creating misaligned accesses in SRA
Date: Tue, 28 Jun 2011 13:59:51 +0200
Message-ID: <20110628115950.GA3645@virgil.arch.suse.de>
References: <20110626182044.GA23403@virgil.arch.suse.de>

Hi,

On Mon, Jun 27, 2011 at 03:18:01PM +0200, Richard Guenther wrote:
> On Sun, 26 Jun 2011, Martin Jambor wrote:
>
> > Hi,
> >
> > under some circumstances involving user specified alignment and/or
> > packed attributes, SRA can create a misaligned MEM_REF.  As the
> > testcase demonstrates, it is not enough to not consider variables with
> > these type attributes, mainly because we might attempt to load/store
> > the scalar replacements from/to right/left sides of original aggregate
> > assignments which might be misaligned.
>
> ...
>
> I think you want something like
>
> static bool
> tree_non_mode_aligned_mem_p (tree exp)
> {
>   enum machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
>   unsigned int align;
>
>   if (mode == BLKmode
>       || !STRICT_ALIGNMENT)
>     return false;
>
>   align = get_object_alignment (exp, BIGGEST_ALIGNMENT);
>   if (GET_MODE_ALIGNMENT (mode) > align)
>     return true;
>
>   return false;
> }
>
> as for STRICT_ALIGNMENT targets we assume that the loads/stores SRA
> inserts have the alignment of the mode.

I admit to being surprised that this works; I did not know aggregates
could have non-BLK modes.  Anyway, it does, and so I intend to commit
the following this evening, after a testsuite run on sparc64.  Please
stop me if the previous message was not a pre-approval of sorts.

Thanks a lot,

Martin


2011-06-28  Martin Jambor

	PR tree-optimization/49094
	* tree-sra.c (tree_non_mode_aligned_mem_p): New function.
	(build_accesses_from_assign): Use it.

	* testsuite/gcc.dg/tree-ssa/pr49094.c: New test.


Index: src/gcc/tree-sra.c
===================================================================
--- src.orig/gcc/tree-sra.c
+++ src/gcc/tree-sra.c
@@ -1050,6 +1050,25 @@ disqualify_ops_if_throwing_stmt (gimple
   return false;
 }
 
+/* Return true iff type of EXP is not sufficiently aligned.  */
+
+static bool
+tree_non_mode_aligned_mem_p (tree exp)
+{
+  enum machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  unsigned int align;
+
+  if (mode == BLKmode
+      || !STRICT_ALIGNMENT)
+    return false;
+
+  align = get_object_alignment (exp, BIGGEST_ALIGNMENT);
+  if (GET_MODE_ALIGNMENT (mode) > align)
+    return true;
+
+  return false;
+}
+
 /* Scan expressions occuring in STMT, create access structures for all accesses
    to candidates for scalarization and remove those candidates which occur in
    statements or expressions that prevent them from being split apart.  Return
@@ -1074,7 +1093,10 @@ build_accesses_from_assign (gimple stmt)
   lacc = build_access_from_expr_1 (lhs, stmt, true);
 
   if (lacc)
-    lacc->grp_assignment_write = 1;
+    {
+      lacc->grp_assignment_write = 1;
+      lacc->grp_unscalarizable_region |= tree_non_mode_aligned_mem_p (rhs);
+    }
 
   if (racc)
     {
@@ -1082,6 +1104,7 @@ build_accesses_from_assign (gimple stmt)
       if (should_scalarize_away_bitmap && !gimple_has_volatile_ops (stmt)
           && !is_gimple_reg_type (racc->type))
         bitmap_set_bit (should_scalarize_away_bitmap, DECL_UID (racc->base));
+      racc->grp_unscalarizable_region |= tree_non_mode_aligned_mem_p (lhs);
     }
 
   if (lacc && racc
Index: src/gcc/testsuite/gcc.dg/tree-ssa/pr49094.c
===================================================================
--- /dev/null
+++ src/gcc/testsuite/gcc.dg/tree-ssa/pr49094.c
@@ -0,0 +1,38 @@
+/* { dg-do run } */
+/* { dg-options "-O" } */
+
+struct in_addr {
+  unsigned int s_addr;
+};
+
+struct ip {
+  unsigned char ip_p;
+  unsigned short ip_sum;
+  struct in_addr ip_src,ip_dst;
+} __attribute__ ((aligned(1), packed));
+
+struct ip ip_fw_fwd_addr;
+
+int test_alignment( char *m )
+{
+  struct ip *ip = (struct ip *) m;
+  struct in_addr pkt_dst;
+  pkt_dst = ip->ip_dst ;
+  if( pkt_dst.s_addr == 0 )
+    return 1;
+  else
+    return 0;
+}
+
+int __attribute__ ((noinline, noclone))
+intermediary (char *p)
+{
+  return test_alignment (p);
+}
+
+int
+main (int argc, char *argv[])
+{
+  ip_fw_fwd_addr.ip_dst.s_addr = 1;
+  return intermediary ((void *) &ip_fw_fwd_addr);
+}
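
For reference, here is a small standalone program (not part of the patch, purely
illustrative) of why the packed layout in the testcase yields an under-aligned
word access on a typical target where unsigned int is 4 bytes and 4-byte
aligned: the packed attribute places ip_dst at byte offset 7, while a word-mode
load of struct in_addr would assume 4-byte alignment, which is exactly the
mismatch the new check catches on STRICT_ALIGNMENT targets such as sparc64.

/* Illustration only -- not part of the patch.  Assumes a target with
   4-byte, 4-byte-aligned unsigned int.  */

#include <stddef.h>
#include <stdio.h>

struct in_addr {
  unsigned int s_addr;
};

struct ip {
  unsigned char ip_p;
  unsigned short ip_sum;
  struct in_addr ip_src, ip_dst;
} __attribute__ ((aligned (1), packed));

int
main (void)
{
  /* Typically prints 7 and 4: the member offset is not a multiple of
     the alignment a plain word load would expect.  */
  printf ("offsetof (struct ip, ip_dst) = %lu\n",
          (unsigned long) offsetof (struct ip, ip_dst));
  printf ("__alignof__ (struct in_addr) = %lu\n",
          (unsigned long) __alignof__ (struct in_addr));
  return 0;
}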