From patchwork Mon Aug  2 20:37:52 2010
From: Bernd Schmidt
X-Patchwork-Id: 60665
Message-ID: <4C572CA0.3040802@codesourcery.com>
Date: Mon, 02 Aug 2010 22:37:52 +0200
To: GCC Patches <gcc-patches@gcc.gnu.org>
CC: Richard Earnshaw
Subject: Combiner fixes

While testing a different ARM patch, I found that I was getting some
code changes I couldn't explain.  They turned out to be caused by a bug
in the combiner: a CONST_INT is created without using
trunc_int_for_mode, so it is missing a sign extension and for that
reason fails to match const_int_operand.
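To make the failure concrete, here is a minimal standalone sketch (not
combiner code; it assumes a 64-bit HOST_WIDE_INT and hand-rolls the
equivalent of trunc_int_for_mode for a 32-bit mode):

#include <stdio.h>

#define HOST_WIDE_INT long long

/* Stand-in for trunc_int_for_mode (c, SImode): reduce C to the low 32
   bits, then sign-extend from bit 31, which is the canonical form a
   CONST_INT used in SImode must have.  */
static HOST_WIDE_INT
trunc_int_for_si_mode (HOST_WIDE_INT c)
{
  c &= 0xffffffffLL;
  if (c & 0x80000000LL)
    c |= ~0xffffffffLL;
  return c;
}

int
main (void)
{
  /* What the combiner built for an SImode shift count of 31...  */
  HOST_WIDE_INT raw = (HOST_WIDE_INT) 1 << 31;
  /* ...and the canonical form it should have built.  */
  HOST_WIDE_INT canon = trunc_int_for_si_mode (raw);

  /* Prints 2147483648 vs -2147483648; since the two differ, the raw
     CONST_INT is non-canonical and const_int_operand rejects it.  */
  printf ("raw = %lld, canonical = %lld\n", raw, canon);
  return 0;
}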
While I was there, I also added some extra canonicalization that helps
on ARM (and produces the optimization I was previously only seeing by
accident).  According to the documentation,

  * In combinations of `neg', `mult', `plus', and `minus', the `neg'
    operations (if any) will be moved inside the operations as far as
    possible.  For instance, `(neg (mult A B))' is canonicalized as
    `(mult (neg A) B)', but `(plus (mult (neg B) C) A)' is
    canonicalized as `(minus A (mult B C))'.

It doesn't say so explicitly, but I take this to mean we can and should
change both

(set (reg/v:SI 137 [ q ])
     (plus:SI (mult:SI (neg:SI (reg/v:SI 135 [ y ]))
                       (const_int 2))
              (reg/v:SI 137 [ q ])))

and

(set (reg/v:SI 137 [ q ])
     (plus:SI (mult:SI (reg/v:SI 135 [ y ])
                       (const_int -2))
              (reg/v:SI 137 [ q ])))

into

(set (reg/v:SI 137 [ q ])
     (minus:SI (reg/v:SI 137 [ q ])
               (mult:SI (reg/v:SI 135 [ y ])
                        (const_int 2))))

The last form is the one the ARM backend recognizes.  IMO the
canonicalization of a shift into a mult is supposed to produce
multiplications by powers of two, and those are all positive values.
On ARM, this helps as follows:

-	rsb	r3, r0, r0, lsl #30
-	add	r6, r6, r3, lsl #2
+	sub	r6, r6, r0, lsl #2

I've also added code to transform (mult (neg (...)) (const)) into a
multiplication by the negated constant.

Bootstrapped and tested on i686-linux.  There are some ARM-specific
bits in here as well, since in a few places we forgot to mask off
unwanted bits of a HOST_WIDE_INT; a sketch of that problem follows
below.  ARM tests are in progress.
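To illustrate those ARM bits: on a 64-bit host, INTVAL of an SImode
constant such as 0x80000000 is sign-extended, and the copies of the
sign bit defeat the usual power-of-two test unless the value is first
masked down to 32 bits.  A standalone sketch (plain C, assuming a
64-bit HOST_WIDE_INT; not the actual predicate code):

#include <stdio.h>

#define HOST_WIDE_INT long long

int
main (void)
{
  /* SImode 0x80000000 as INTVAL delivers it: 0xffffffff80000000.  */
  HOST_WIDE_INT ival = -0x80000000LL;

  /* The unmasked test fails: the sign-extension bits survive the AND.  */
  int unmasked = ival != 0 && (ival & (ival - 1)) == 0;

  /* Masking down to 32 bits first recognizes the power of two.  */
  unsigned HOST_WIDE_INT value = ival & 0xffffffffLL;
  int masked = value != 0 && (value & (value - 1)) == 0;

  printf ("unmasked: %d, masked: %d\n", unmasked, masked);  /* 0, 1 */
  return 0;
}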
Ok?

Bernd

	* simplify-rtx.c (simplify_binary_operation_1): Change a multiply
	of a negated value and a constant into a multiply by the negated
	constant.
	* combine.c (make_compound_operation): Use trunc_int_for_mode when
	generating a MULT with constant.  Canonicalize PLUS involving MULT.
	* config/arm/constraints.md (M): Examine only 32 bits of a
	HOST_WIDE_INT.
	* config/arm/predicates.md (power_of_two_operand): Likewise.

Index: config/arm/constraints.md
===================================================================
--- config/arm/constraints.md	(revision 162821)
+++ config/arm/constraints.md	(working copy)
@@ -121,7 +121,7 @@ (define_constraint "M"
  "In Thumb-1 state a constant that is a multiple of 4 in the range 0-1020."
  (and (match_code "const_int")
       (match_test "TARGET_32BIT ? ((ival >= 0 && ival <= 32)
-				    || ((ival & (ival - 1)) == 0))
+				    || (((ival & (ival - 1)) & 0xFFFFFFFF) == 0))
 		   : ival >= 0 && ival <= 1020 && (ival & 3) == 0")))
 
 (define_constraint "N"
Index: config/arm/predicates.md
===================================================================
--- config/arm/predicates.md	(revision 162821)
+++ config/arm/predicates.md	(working copy)
@@ -289,7 +289,7 @@ (define_special_predicate "arm_reg_or_ex
 (define_predicate "power_of_two_operand"
   (match_code "const_int")
 {
-  HOST_WIDE_INT value = INTVAL (op);
+  unsigned HOST_WIDE_INT value = INTVAL (op) & 0xffffffff;
 
   return value != 0 && (value & (value - 1)) == 0;
 })
Index: simplify-rtx.c
===================================================================
--- simplify-rtx.c	(revision 162821)
+++ simplify-rtx.c	(working copy)
@@ -2109,6 +2109,10 @@ simplify_binary_operation_1 (enum rtx_co
       if (trueop1 == constm1_rtx)
 	return simplify_gen_unary (NEG, mode, op0, mode);
 
+      if (GET_CODE (op0) == NEG && CONST_INT_P (trueop1))
+	return simplify_gen_binary (MULT, mode, XEXP (op0, 0),
+				    simplify_gen_unary (NEG, mode, op1, mode));
+
       /* Maybe simplify x * 0 to 0.  The reduction is not valid if
 	 x is NaN, since x * 0 is then also NaN.  Nor is it valid
 	 when the mode has signed zeros, since multiplying a negative
Index: combine.c
===================================================================
--- combine.c	(revision 162821)
+++ combine.c	(working copy)
@@ -7110,10 +7110,46 @@ make_compound_operation (rtx x, enum rtx
 	  && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT
 	  && INTVAL (XEXP (x, 1)) >= 0)
 	{
+	  HOST_WIDE_INT count = INTVAL (XEXP (x, 1));
+	  HOST_WIDE_INT multval = (HOST_WIDE_INT) 1 << count;
+
 	  new_rtx = make_compound_operation (XEXP (x, 0), next_code);
-	  new_rtx = gen_rtx_MULT (mode, new_rtx,
-				  GEN_INT ((HOST_WIDE_INT) 1
-					   << INTVAL (XEXP (x, 1))));
+	  if (GET_CODE (new_rtx) == NEG)
+	    {
+	      new_rtx = XEXP (new_rtx, 0);
+	      multval = -multval;
+	    }
+	  multval = trunc_int_for_mode (multval, mode);
+	  new_rtx = gen_rtx_MULT (mode, new_rtx, GEN_INT (multval));
+	}
+      break;
+
+    case PLUS:
+      lhs = XEXP (x, 0);
+      rhs = XEXP (x, 1);
+      lhs = make_compound_operation (lhs, MEM);
+      rhs = make_compound_operation (rhs, MEM);
+      if (GET_CODE (lhs) == MULT && GET_CODE (XEXP (lhs, 0)) == NEG
+	  && SCALAR_INT_MODE_P (mode))
+	{
+	  tem = simplify_gen_binary (MULT, mode, XEXP (XEXP (lhs, 0), 0),
+				     XEXP (lhs, 1));
+	  new_rtx = simplify_gen_binary (MINUS, mode, rhs, tem);
+	}
+      else if (GET_CODE (lhs) == MULT
+	       && (CONST_INT_P (XEXP (lhs, 1)) && INTVAL (XEXP (lhs, 1)) < 0))
+	{
+	  tem = simplify_gen_binary (MULT, mode, XEXP (lhs, 0),
+				     simplify_gen_unary (NEG, mode,
+							 XEXP (lhs, 1),
+							 mode));
+	  new_rtx = simplify_gen_binary (MINUS, mode, rhs, tem);
+	}
+      else
+	{
+	  SUBST (XEXP (x, 0), lhs);
+	  SUBST (XEXP (x, 1), rhs);
+	  goto maybe_swap;
 	}
       break;
 
@@ -7345,6 +7381,7 @@ make_compound_operation (rtx x, enum rtx
 	  SUBST (XVECEXP (x, i, j), new_rtx);
 	}
 
+ maybe_swap:
   /* If this is a commutative operation, the changes to the operands
      may have made it noncanonical.  */
   if (COMMUTATIVE_ARITH_P (x)
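As a sanity check on the PLUS canonicalization, the rewrites above are
value-preserving in 32-bit wraparound arithmetic; a quick standalone
demonstration in plain C (not combiner code):

#include <stdio.h>
#include <stdint.h>

int
main (void)
{
  uint32_t y = 0x12345678u, m = 2u, q = 0xdeadbeefu;

  uint32_t a = -y * m + q;	/* (plus (mult (neg y) m) q) */
  uint32_t b = y * -m + q;	/* (plus (mult y (neg m)) q) */
  uint32_t c = q - y * m;	/* (minus q (mult y m)) */

  /* All three print the same value.  */
  printf ("0x%08x 0x%08x 0x%08x\n", (unsigned) a, (unsigned) b,
	  (unsigned) c);
  return 0;
}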