From patchwork Tue Dec 13 11:46:04 2016
X-Patchwork-Submitter: Thomas Preudhomme
X-Patchwork-Id: 705382
Subject: Re: [PATCH, GCC, gcc-5/6-branch] Fix PR77673: bswap loads past end of object
To: Richard Biener, Jakub Jelinek, "gcc-patches@gcc.gnu.org"
From: Thomas Preudhomme
Date: Tue, 13 Dec 2016 11:46:04 +0000
In-Reply-To: <146a317d-a81f-dbd5-f041-1b116222744b@foss.arm.com>
References: <146a317d-a81f-dbd5-f041-1b116222744b@foss.arm.com>

Sorry, ignore the first attachment (2_combine_profile_multilib.patch). I always
miss that Thunderbird preselects the first file in the in-review folder.
Best regards,

Thomas

On 13/12/16 11:43, Thomas Preudhomme wrote:
> Hi,
>
> The fix in r242869 for PR77673 (bswap loads past the end of an object) applies
> cleanly to GCC 6, and with a trivial adjustment for GCC 5 (gimple * in GCC 6 ->
> gimple in GCC 5). The backports also bootstrap fine on x86_64-linux-gnu and the
> testsuite shows no regressions.
>
> ChangeLog entries are as follows:
>
>
> *** gcc/ChangeLog ***
>
> 2016-12-12  Thomas Preud'homme
>
>         Backport from mainline
>         2016-11-25  Thomas Preud'homme
>
>         PR tree-optimization/77673
>         * tree-ssa-math-opts.c (struct symbolic_number): Add new src field.
>         (init_symbolic_number): Initialize src field from src parameter.
>         (perform_symbolic_merge): Select most dominated statement as the
>         source statement.  Set src field of resulting n structure from the
>         input src with the lowest address.
>         (find_bswap_or_nop): Rename source_stmt into ins_stmt.
>         (bswap_replace): Rename src_stmt into ins_stmt.  Initially get source
>         of load from src field rather than insertion statement.  Cancel
>         optimization if statement analyzed is not dominated by the insertion
>         statement.
>         (pass_optimize_bswap::execute): Rename src_stmt to ins_stmt.  Compute
>         dominance information.
>
>
> *** gcc/testsuite/ChangeLog ***
>
> 2016-12-12  Thomas Preud'homme
>
>         Backport from mainline
>         2016-11-25  Thomas Preud'homme
>
>         PR tree-optimization/77673
>         * gcc.dg/pr77673.c: New test.
>
>
> Is this ok for gcc-5-branch and gcc-6-branch?
>
> Best regards,
>
> Thomas

diff --git a/gcc/testsuite/gcc.dg/pr77673.c b/gcc/testsuite/gcc.dg/pr77673.c
new file mode 100644
index 0000000000000000000000000000000000000000..e03bcb9284d1e5a0e496cfa547fdbab630bec04f
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr77673.c
@@ -0,0 +1,19 @@
+/* { dg-do compile { target fpic } } */
+/* { dg-require-effective-target bswap32 } */
+/* { dg-options "-O2 -fPIC -fdump-tree-bswap" } */
+/* { dg-additional-options "-march=z900" { target s390*-*-* } } */
+
+void mach_parse_compressed(unsigned char* ptr, unsigned long int* val)
+{
+  if (ptr[0] < 0xC0U) {
+    *val = ptr[0] + ptr[1];
+    return;
+  }
+
+  *val = ((unsigned long int)(ptr[0]) << 24)
+         | ((unsigned long int)(ptr[1]) << 16)
+         | ((unsigned long int)(ptr[2]) << 8)
+         | ptr[3];
+}
+
+/* { dg-final { scan-tree-dump-not "load_dst_\\d+ =.* if \\(" "bswap" } } */
diff --git a/gcc/tree-ssa-math-opts.c b/gcc/tree-ssa-math-opts.c
index 735b7c67c31df0c8317544346396fb8b15315879..ac15e8179a3257d1e190086c8089bc85ed8552bf 100644
--- a/gcc/tree-ssa-math-opts.c
+++ b/gcc/tree-ssa-math-opts.c
@@ -1951,6 +1951,7 @@ struct symbolic_number {
   tree base_addr;
   tree offset;
   HOST_WIDE_INT bytepos;
+  tree src;
   tree alias_set;
   tree vuse;
   unsigned HOST_WIDE_INT range;
@@ -2052,6 +2053,7 @@ init_symbolic_number (struct symbolic_number *n, tree src)
   int size;
 
   n->base_addr = n->offset = n->alias_set = n->vuse = NULL_TREE;
+  n->src = src;
 
   /* Set up the symbolic number N by setting each byte to a value between 1 and
      the byte size of rhs1.  The highest order byte is set to n->size and the
@@ -2167,6 +2169,7 @@ perform_symbolic_merge (gimple *source_stmt1, struct symbolic_number *n1,
   uint64_t inc;
   HOST_WIDE_INT start_sub, end_sub, end1, end2, end;
   struct symbolic_number *toinc_n_ptr, *n_end;
+  basic_block bb1, bb2;
 
   if (!n1->base_addr || !n2->base_addr
       || !operand_equal_p (n1->base_addr, n2->base_addr, 0))
@@ -2180,15 +2183,20 @@
     {
       n_start = n1;
       start_sub = n2->bytepos - n1->bytepos;
-      source_stmt = source_stmt1;
     }
   else
     {
       n_start = n2;
       start_sub = n1->bytepos - n2->bytepos;
-      source_stmt = source_stmt2;
     }
 
+  bb1 = gimple_bb (source_stmt1);
+  bb2 = gimple_bb (source_stmt2);
+  if (dominated_by_p (CDI_DOMINATORS, bb1, bb2))
+    source_stmt = source_stmt1;
+  else
+    source_stmt = source_stmt2;
+
   /* Find the highest address at which a load is performed and
      compute related info.  */
   end1 = n1->bytepos + (n1->range - 1);
@@ -2245,6 +2253,7 @@ perform_symbolic_merge (gimple *source_stmt1, struct symbolic_number *n1,
   n->vuse = n_start->vuse;
   n->base_addr = n_start->base_addr;
   n->offset = n_start->offset;
+  n->src = n_start->src;
   n->bytepos = n_start->bytepos;
   n->type = n_start->type;
   size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
@@ -2455,7 +2464,7 @@ find_bswap_or_nop (gimple *stmt, struct symbolic_number *n, bool *bswap)
   uint64_t cmpxchg = CMPXCHG;
   uint64_t cmpnop = CMPNOP;
 
-  gimple *source_stmt;
+  gimple *ins_stmt;
   int limit;
 
   /* The last parameter determines the depth search limit.  It usually
@@ -2465,9 +2474,9 @@ find_bswap_or_nop (gimple *stmt, struct symbolic_number *n, bool *bswap)
      in libgcc, and for initial shift/and operation of the src operand.  */
   limit = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (gimple_expr_type (stmt)));
   limit += 1 + (int) ceil_log2 ((unsigned HOST_WIDE_INT) limit);
-  source_stmt = find_bswap_or_nop_1 (stmt, n, limit);
+  ins_stmt = find_bswap_or_nop_1 (stmt, n, limit);
 
-  if (!source_stmt)
+  if (!ins_stmt)
     return NULL;
 
   /* Find real size of result (highest non-zero byte).  */
@@ -2509,7 +2518,7 @@ find_bswap_or_nop (gimple *stmt, struct symbolic_number *n, bool *bswap)
     return NULL;
 
   n->range *= BITS_PER_UNIT;
-  return source_stmt;
+  return ins_stmt;
 }
 
 namespace {
@@ -2558,7 +2567,7 @@ public:
    changing of basic block.  */
 
 static bool
-bswap_replace (gimple *cur_stmt, gimple *src_stmt, tree fndecl,
+bswap_replace (gimple *cur_stmt, gimple *ins_stmt, tree fndecl,
                tree bswap_type, tree load_type, struct symbolic_number *n,
                bool bswap)
 {
@@ -2567,18 +2576,24 @@ bswap_replace (gimple *cur_stmt, gimple *src_stmt, tree fndecl,
   gimple *bswap_stmt;
 
   gsi = gsi_for_stmt (cur_stmt);
-  src = gimple_assign_rhs1 (src_stmt);
+  src = n->src;
   tgt = gimple_assign_lhs (cur_stmt);
 
   /* Need to load the value from memory first.  */
   if (n->base_addr)
     {
-      gimple_stmt_iterator gsi_ins = gsi_for_stmt (src_stmt);
+      gimple_stmt_iterator gsi_ins = gsi_for_stmt (ins_stmt);
       tree addr_expr, addr_tmp, val_expr, val_tmp;
       tree load_offset_ptr, aligned_load_type;
       gimple *addr_stmt, *load_stmt;
       unsigned align;
       HOST_WIDE_INT load_offset = 0;
+      basic_block ins_bb, cur_bb;
+
+      ins_bb = gimple_bb (ins_stmt);
+      cur_bb = gimple_bb (cur_stmt);
+      if (!dominated_by_p (CDI_DOMINATORS, cur_bb, ins_bb))
+        return false;
 
       align = get_object_alignment (src);
       /* If the new access is smaller than the original one, we need
@@ -2610,7 +2625,7 @@ bswap_replace (gimple *cur_stmt, gimple *src_stmt, tree fndecl,
       /* Move cur_stmt just before one of the load of the original
          to ensure it has the same VUSE.
         See PR61517 for what could go wrong.  */
-      if (gimple_bb (cur_stmt) != gimple_bb (src_stmt))
+      if (gimple_bb (cur_stmt) != gimple_bb (ins_stmt))
        reset_flow_sensitive_info (gimple_assign_lhs (cur_stmt));
       gsi_move_before (&gsi, &gsi_ins);
       gsi = gsi_for_stmt (cur_stmt);
@@ -2783,6 +2798,7 @@ pass_optimize_bswap::execute (function *fun)
 
   memset (&nop_stats, 0, sizeof (nop_stats));
   memset (&bswap_stats, 0, sizeof (bswap_stats));
+  calculate_dominance_info (CDI_DOMINATORS);
 
   FOR_EACH_BB_FN (bb, fun)
     {
@@ -2794,7 +2810,7 @@ pass_optimize_bswap::execute (function *fun)
         variant wouldn't be detected.  */
       for (gsi = gsi_last_bb (bb); !gsi_end_p (gsi);)
        {
-         gimple *src_stmt, *cur_stmt = gsi_stmt (gsi);
+         gimple *ins_stmt, *cur_stmt = gsi_stmt (gsi);
          tree fndecl = NULL_TREE, bswap_type = NULL_TREE, load_type;
          enum tree_code code;
          struct symbolic_number n;
@@ -2827,9 +2843,9 @@ pass_optimize_bswap::execute (function *fun)
              continue;
            }
 
-         src_stmt = find_bswap_or_nop (cur_stmt, &n, &bswap);
+         ins_stmt = find_bswap_or_nop (cur_stmt, &n, &bswap);
 
-         if (!src_stmt)
+         if (!ins_stmt)
            continue;
 
          switch (n.range)
@@ -2863,7 +2879,7 @@ pass_optimize_bswap::execute (function *fun)
          if (bswap && !fndecl && n.range != 16)
            continue;
 
-         if (bswap_replace (cur_stmt, src_stmt, fndecl, bswap_type, load_type,
+         if (bswap_replace (cur_stmt, ins_stmt, fndecl, bswap_type, load_type,
                             &n, bswap))
            changed = true;
        }
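
For context, here is a rough sketch (illustrative only, not part of the patch; the
function name combine_be32 is made up) of what the bswap pass emits for the
fall-through path of the test case above on a little-endian target: the four byte
loads combined big-endian become a single 32-bit load followed by a byte swap.
The point of the fix is that this widened load may only be inserted at a point
dominated by all of the original loads; otherwise it would also execute on the
early-return path, where only ptr[0] and ptr[1] are known to be readable.

#include <stdint.h>

/* Illustrative only: roughly what the pass produces for the else-path of
   mach_parse_compressed, assuming a little-endian target.  */
unsigned long int
combine_be32 (const unsigned char *ptr)
{
  uint32_t word;
  __builtin_memcpy (&word, ptr, sizeof word);   /* single 32-bit load */
  return __builtin_bswap32 (word);              /* byte swap to big-endian order */
}

The dg-final line in the new test checks exactly this: the synthesized
load_dst_N statement must not appear ahead of the "if" in the bswap dump.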