From patchwork Wed Oct 11 01:29:34 2023
X-Patchwork-Submitter: Jiufu Guo
X-Patchwork-Id: 1846192
From: Jiufu Guo
To: gcc-patches@gcc.gnu.org
Cc: segher@kernel.crashing.org, dje.gcc@gmail.com, linkw@gcc.gnu.org,
    bergner@linux.ibm.com, guojiufu@linux.ibm.com
Subject: [PATCH] early outs for functions in rs6000.cc
Date: Wed, 11 Oct 2023 09:29:34 +0800
Message-Id: <20231011012934.1370966-1-guojiufu@linux.ibm.com>
X-Mailer: git-send-email 2.25.1

Hi,

There are some pieces of code like the one below in rs6000.cc:

  ...
  if (xx)
    return x;
  else if (yy)
    return y;
  ...   // else-if chain
  else
    return d;

Using early outs is preferable for this kind of code.
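For illustration, the chain above would be restructured roughly as follows
(a sketch of the intended shape, not an excerpt from the patch):

  ...
  if (xx)
    return x;
  if (yy)
    return y;
  ...   // each remaining case returns directly
  return d;

Every branch returns as soon as its condition holds, so the trailing else
and the extra nesting disappear.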
The functions rs6000_emit_set_long_const and num_insns_constant_gpr are
refined as a whole, and similar code elsewhere in rs6000.cc is restructured
as well.

This patch passes bootstrap and regtest on ppc64{,le}.
Is this ok for trunk?

BR,
Jeff (Jiufu Guo)

gcc/ChangeLog:

	* config/rs6000/rs6000.cc (vspltis_shifted): Adopt early outs.
	(rs6000_cost_data::determine_suggested_unroll_factor): Likewise.
	(num_insns_constant_gpr): Likewise.
	(easy_altivec_constant): Likewise.
	(output_vec_const_move): Likewise.
	(rs6000_expand_vector_set): Likewise.
	(rs6000_legitimize_address): Likewise.
	(rs6000_emit_set_long_const): Likewise.
	(rs6000_preferred_reload_class): Likewise.
	(rs6000_can_change_mode_class): Likewise.
	(rs6000_output_move_128bit): Likewise.
	(rs6000_emit_vector_cond_expr): Likewise.
	(adjacent_mem_locations): Likewise.
	(is_fusable_store): Likewise.
	(insn_terminates_group_p): Likewise.
	(rs6000_elf_reloc_rw_mask): Likewise.
	(rs6000_rtx_costs): Likewise.
	(rs6000_scalar_mode_supported_p): Likewise.
	(rs6000_update_ipa_fn_target_info): Likewise.
	(reg_to_non_prefixed): Likewise.
	(rs6000_split_logical_inner): Likewise.
	(rs6000_opaque_type_invalid_use_p): Likewise.
---
 gcc/config/rs6000/rs6000.cc | 495 ++++++++++++++++++------------------
 1 file changed, 249 insertions(+), 246 deletions(-)

diff --git a/gcc/config/rs6000/rs6000.cc b/gcc/config/rs6000/rs6000.cc
index d10d22a5816..43681d9eb08 100644
--- a/gcc/config/rs6000/rs6000.cc
+++ b/gcc/config/rs6000/rs6000.cc
@@ -5522,24 +5522,22 @@ rs6000_cost_data::determine_suggested_unroll_factor (loop_vec_info loop_vinfo) to vectorize it with the unrolled VF any more if the actual iteration count is in between. */ return 1; - else - { - unsigned int epil_niter_unr = est_niter % unrolled_vf; - unsigned int epil_niter = est_niter % vf; - /* Even if we have partial vector support, it can be still inefficent - to calculate the length when the iteration count is unknown, so - only expect it's good to unroll when the epilogue iteration count - is not bigger than VF (only one time length calculation). */ - if (LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo) - && epil_niter_unr <= vf) - return uf; - /* Without partial vector support, conservatively unroll this when - the epilogue iteration count is less than the original one - (epilogue execution time wouldn't be longer than before). */ - else if (!LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo) - && epil_niter_unr <= epil_niter) - return uf; - } + + unsigned int epil_niter_unr = est_niter % unrolled_vf; + unsigned int epil_niter = est_niter % vf; + /* Even if we have partial vector support, it can be still inefficent + to calculate the length when the iteration count is unknown, so + only expect it's good to unroll when the epilogue iteration count + is not bigger than VF (only one time length calculation). */ + if (LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo) && epil_niter_unr <= vf) + return uf; + + /* Without partial vector support, conservatively unroll this when + the epilogue iteration count is less than the original one + (epilogue execution time wouldn't be longer than before). */ + if (!LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo) + && epil_niter_unr <= epil_niter) + return uf; return 1; }
@@ -6044,35 +6042,31 @@ num_insns_constant_gpr (HOST_WIDE_INT value) return 1; /* constant loadable with addis */ - else if ((value & 0xffff) == 0 - && (value >> 31 == -1 || value >> 31 == 0)) + if ((value & 0xffff) == 0 && (value >> 31 == -1 || value >> 31 == 0)) return 1; /* PADDI can support up to 34 bit signed integers.
*/ - else if (TARGET_PREFIXED && SIGNED_INTEGER_34BIT_P (value)) + if (TARGET_PREFIXED && SIGNED_INTEGER_34BIT_P (value)) return 1; - else if (TARGET_POWERPC64) - { - HOST_WIDE_INT low = sext_hwi (value, 32); - HOST_WIDE_INT high = value >> 31; + if (!TARGET_POWERPC64) + return 2; - if (high == 0 || high == -1) - return 2; + /* TARGET_POWERPC64 */ + HOST_WIDE_INT low = sext_hwi (value, 32); + HOST_WIDE_INT high = value >> 31; + + if (high == 0 || high == -1) + return 2; - high >>= 1; + high >>= 1; - if (low == 0 || low == high) - return num_insns_constant_gpr (high) + 1; - else if (high == 0) - return num_insns_constant_gpr (low) + 1; - else - return (num_insns_constant_gpr (high) - + num_insns_constant_gpr (low) + 1); - } + if (low == 0 || low == high) + return num_insns_constant_gpr (high) + 1; + if (high == 0) + return num_insns_constant_gpr (low) + 1; - else - return 2; + return (num_insns_constant_gpr (high) + num_insns_constant_gpr (low) + 1); } /* Helper for num_insns_constant. Allow constants formed by the @@ -6378,7 +6372,7 @@ vspltis_shifted (rtx op) { if (elt_val == 0) { - for (j = i+1; j < nunits; ++j) + for (j = i + 1; j < nunits; ++j) { unsigned elt2 = BYTES_BIG_ENDIAN ? j : nunits - 1 - j; if (const_vector_elt_as_int (op, elt2) != 0) @@ -6388,9 +6382,9 @@ vspltis_shifted (rtx op) return (nunits - i) * GET_MODE_SIZE (inner); } - else if ((elt_val & mask) == mask) + if ((elt_val & mask) == mask) { - for (j = i+1; j < nunits; ++j) + for (j = i + 1; j < nunits; ++j) { unsigned elt2 = BYTES_BIG_ENDIAN ? j : nunits - 1 - j; if ((const_vector_elt_as_int (op, elt2) & mask) != mask) @@ -6400,8 +6394,7 @@ vspltis_shifted (rtx op) return -((nunits - i) * GET_MODE_SIZE (inner)); } - else - return 0; + return 0; } } @@ -6428,7 +6421,7 @@ easy_altivec_constant (rtx op, machine_mode mode) if (mode == V2DFmode) return zero_constant (op, mode) ? 8 : 0; - else if (mode == V2DImode) + if (mode == V2DImode) { if (!CONST_INT_P (CONST_VECTOR_ELT (op, 0)) || !CONST_INT_P (CONST_VECTOR_ELT (op, 1))) @@ -6445,7 +6438,7 @@ easy_altivec_constant (rtx op, machine_mode mode) } /* V1TImode is a special container for TImode. Ignore for now. */ - else if (mode == V1TImode) + if (mode == V1TImode) return 0; /* Start with a vspltisw. 
*/ @@ -6695,11 +6688,10 @@ output_vec_const_move (rtx *operands) if (TARGET_P9_VECTOR) return "xxspltib %x0,0"; - else if (dest_vmx_p) + if (dest_vmx_p) return "vspltisw %0,0"; - else - return "xxlxor %x0,%x0,%x0"; + return "xxlxor %x0,%x0,%x0"; } if (all_ones_constant (vec, mode)) @@ -6707,14 +6699,13 @@ output_vec_const_move (rtx *operands) if (TARGET_P9_VECTOR) return "xxspltib %x0,255"; - else if (dest_vmx_p) + if (dest_vmx_p) return "vspltisw %0,-1"; - else if (TARGET_P8_VECTOR) + if (TARGET_P8_VECTOR) return "xxlorc %x0,%x0,%x0"; - else - gcc_unreachable (); + gcc_unreachable (); } vec_const_128bit_type vsx_const; @@ -6745,13 +6736,11 @@ output_vec_const_move (rtx *operands) if (TARGET_P9_VECTOR && xxspltib_constant_p (vec, mode, &num_insns, &xxspltib_value)) { - if (num_insns == 1) - { - operands[2] = GEN_INT (xxspltib_value & 0xff); - return "xxspltib %x0,%2"; - } + if (num_insns != 1) + return "#"; - return "#"; + operands[2] = GEN_INT (xxspltib_value & 0xff); + return "xxspltib %x0,%2"; } } @@ -7482,13 +7471,14 @@ rs6000_expand_vector_set (rtx target, rtx val, rtx elt_rtx) rs6000_expand_vector_set_var_p9 (target, val, elt_rtx); return; } - else if (TARGET_VSX) + + if (TARGET_VSX) { rs6000_expand_vector_set_var_p7 (target, val, elt_rtx); return; } - else - gcc_assert (CONST_INT_P (elt_rtx)); + + gcc_assert (CONST_INT_P (elt_rtx)); } rtx insn = NULL_RTX; @@ -9095,35 +9085,37 @@ legitimate_lo_sum_address_p (machine_mode mode, rtx x, int strict) return CONSTANT_P (x) || large_toc_ok; } - else if (TARGET_MACHO) - { - if (GET_MODE_NUNITS (mode) != 1) - return false; - if (GET_MODE_SIZE (mode) > UNITS_PER_WORD - && !(/* see above */ - TARGET_HARD_FLOAT && (mode == DFmode || mode == DDmode))) - return false; + + if (!TARGET_MACHO) + return false; + + /* TARGET_MACHO */ + if (GET_MODE_NUNITS (mode) != 1) + return false; + if (GET_MODE_SIZE (mode) > UNITS_PER_WORD + && !(/* see above */ + TARGET_HARD_FLOAT && (mode == DFmode || mode == DDmode))) + return false; + #if TARGET_MACHO - if (MACHO_DYNAMIC_NO_PIC_P || !flag_pic) - return CONSTANT_P (x); + if (MACHO_DYNAMIC_NO_PIC_P || !flag_pic) + return CONSTANT_P (x); #endif - /* Macho-O PIC code from here. */ - if (GET_CODE (x) == CONST) - x = XEXP (x, 0); - /* SYMBOL_REFs need to be wrapped in an UNSPEC_MACHOPIC_OFFSET. */ - if (SYMBOL_REF_P (x)) - return false; + /* Macho-O PIC code from here. */ + if (GET_CODE (x) == CONST) + x = XEXP (x, 0); - /* So this is OK if the wrapped object is const. */ - if (GET_CODE (x) == UNSPEC - && XINT (x, 1) == UNSPEC_MACHOPIC_OFFSET) - return CONSTANT_P (XVECEXP (x, 0, 0)); - return CONSTANT_P (x); - } - return false; -} + /* SYMBOL_REFs need to be wrapped in an UNSPEC_MACHOPIC_OFFSET. */ + if (SYMBOL_REF_P (x)) + return false; + /* So this is OK if the wrapped object is const. */ + if (GET_CODE (x) == UNSPEC && XINT (x, 1) == UNSPEC_MACHOPIC_OFFSET) + return CONSTANT_P (XVECEXP (x, 0, 0)); + + return CONSTANT_P (x); +} /* Try machine-dependent ways of modifying an illegitimate address to be legitimate. If we find one, return the new, valid address. @@ -9166,13 +9158,13 @@ rs6000_legitimize_address (rtx x, rtx oldx ATTRIBUTE_UNUSED, /* For TImode with load/store quad, restrict addresses to just a single pointer, so it works with both GPRs and VSX registers. */ /* Make sure both operands are registers. 
*/ - else if (GET_CODE (x) == PLUS + if (GET_CODE (x) == PLUS && (mode != TImode || !TARGET_VSX)) return gen_rtx_PLUS (Pmode, force_reg (Pmode, XEXP (x, 0)), force_reg (Pmode, XEXP (x, 1))); - else - return force_reg (Pmode, x); + + return force_reg (Pmode, x); } if (SYMBOL_REF_P (x) && !TARGET_MACHO) { @@ -9217,7 +9209,8 @@ rs6000_legitimize_address (rtx x, rtx oldx ATTRIBUTE_UNUSED, gen_int_mode (high_int, Pmode)), 0); return plus_constant (Pmode, sum, low_int); } - else if (GET_CODE (x) == PLUS + + if (GET_CODE (x) == PLUS && REG_P (XEXP (x, 0)) && !CONST_INT_P (XEXP (x, 1)) && GET_MODE_NUNITS (mode) == 1 @@ -9225,11 +9218,10 @@ rs6000_legitimize_address (rtx x, rtx oldx ATTRIBUTE_UNUSED, || (/* ??? Assume floating point reg based on mode? */ TARGET_HARD_FLOAT && (mode == DFmode || mode == DDmode))) && !avoiding_indexed_address_p (mode)) - { - return gen_rtx_PLUS (Pmode, XEXP (x, 0), - force_reg (Pmode, force_operand (XEXP (x, 1), 0))); - } - else if ((TARGET_ELF + return gen_rtx_PLUS (Pmode, XEXP (x, 0), + force_reg (Pmode, force_operand (XEXP (x, 1), 0))); + + if ((TARGET_ELF #if TARGET_MACHO || !MACHO_DYNAMIC_NO_PIC_P #endif @@ -9253,13 +9245,14 @@ rs6000_legitimize_address (rtx x, rtx oldx ATTRIBUTE_UNUSED, emit_insn (gen_macho_high (Pmode, reg, x)); return gen_rtx_LO_SUM (Pmode, reg, x); } - else if (TARGET_TOC + + if (TARGET_TOC && SYMBOL_REF_P (x) && constant_pool_expr_p (x) && ASM_OUTPUT_SPECIAL_POOL_ENTRY_P (get_pool_constant (x), Pmode)) return create_TOC_reference (x, NULL_RTX); - else - return x; + + return x; } /* Debug version of rs6000_legitimize_address. */ @@ -10487,11 +10480,14 @@ rs6000_emit_set_long_const (rtx dest, HOST_WIDE_INT c) ud4 = (c >> 48) & 0xffff; if ((ud4 == 0xffff && ud3 == 0xffff && ud2 == 0xffff && (ud1 & 0x8000)) - || (ud4 == 0 && ud3 == 0 && ud2 == 0 && ! (ud1 & 0x8000))) - emit_move_insn (dest, GEN_INT (sext_hwi (ud1, 16))); + || (ud4 == 0 && ud3 == 0 && ud2 == 0 && !(ud1 & 0x8000))) + { + emit_move_insn (dest, GEN_INT (sext_hwi (ud1, 16))); + return; + } - else if ((ud4 == 0xffff && ud3 == 0xffff && (ud2 & 0x8000)) - || (ud4 == 0 && ud3 == 0 && ! (ud2 & 0x8000))) + if ((ud4 == 0xffff && ud3 == 0xffff && (ud2 & 0x8000)) + || (ud4 == 0 && ud3 == 0 && !(ud2 & 0x8000))) { temp = !can_create_pseudo_p () ? dest : gen_reg_rtx (DImode); @@ -10499,26 +10495,35 @@ rs6000_emit_set_long_const (rtx dest, HOST_WIDE_INT c) GEN_INT (sext_hwi (ud2 << 16, 32))); if (ud1 != 0) emit_move_insn (dest, gen_rtx_IOR (DImode, temp, GEN_INT (ud1))); + + return; } - else if (ud4 == 0xffff && ud3 == 0xffff && !(ud2 & 0x8000) && ud1 == 0) + + if (ud4 == 0xffff && ud3 == 0xffff && !(ud2 & 0x8000) && ud1 == 0) { /* lis; xoris */ temp = !can_create_pseudo_p () ? dest : gen_reg_rtx (DImode); emit_move_insn (temp, GEN_INT (sext_hwi ((ud2 | 0x8000) << 16, 32))); emit_move_insn (dest, gen_rtx_XOR (DImode, temp, GEN_INT (0x80000000))); + + return; } - else if (ud4 == 0xffff && ud3 == 0xffff && (ud1 & 0x8000)) + + if (ud4 == 0xffff && ud3 == 0xffff && (ud1 & 0x8000)) { /* li; xoris */ temp = !can_create_pseudo_p () ? 
dest : gen_reg_rtx (DImode); emit_move_insn (temp, GEN_INT (sext_hwi (ud1, 16))); emit_move_insn (dest, gen_rtx_XOR (DImode, temp, GEN_INT ((ud2 ^ 0xffff) << 16))); + return; } - else if (can_be_built_by_li_lis_and_rotldi (c, &shift, &mask) - || can_be_built_by_li_lis_and_rldicl (c, &shift, &mask) - || can_be_built_by_li_lis_and_rldicr (c, &shift, &mask) - || can_be_built_by_li_and_rldic (c, &shift, &mask)) + + /* li/lis; rldicl/rldicr/rldic */ + if (can_be_built_by_li_lis_and_rotldi (c, &shift, &mask) + || can_be_built_by_li_lis_and_rldicl (c, &shift, &mask) + || can_be_built_by_li_lis_and_rldicr (c, &shift, &mask) + || can_be_built_by_li_and_rldic (c, &shift, &mask)) { temp = !can_create_pseudo_p () ? dest : gen_reg_rtx (DImode); unsigned HOST_WIDE_INT imm = (c | ~mask); @@ -10530,8 +10535,11 @@ rs6000_emit_set_long_const (rtx dest, HOST_WIDE_INT c) if (mask != HOST_WIDE_INT_M1) temp = gen_rtx_AND (DImode, temp, GEN_INT (mask)); emit_move_insn (dest, temp); + + return; } - else if (ud3 == 0 && ud4 == 0) + + if (ud3 == 0 && ud4 == 0) { temp = !can_create_pseudo_p () ? dest : gen_reg_rtx (DImode); @@ -10543,24 +10551,27 @@ rs6000_emit_set_long_const (rtx dest, HOST_WIDE_INT c) emit_move_insn (temp, GEN_INT (sext_hwi (ud2 << 16, 32))); emit_move_insn (dest, gen_rtx_AND (DImode, temp, GEN_INT (0xffffffff))); + return; } - else if (!(ud1 & 0x8000)) + + if (!(ud1 & 0x8000)) { /* li; oris */ emit_move_insn (temp, GEN_INT (ud1)); emit_move_insn (dest, gen_rtx_IOR (DImode, temp, GEN_INT (ud2 << 16))); + return; } - else - { - /* lis; ori; rldicl */ - emit_move_insn (temp, GEN_INT (sext_hwi (ud2 << 16, 32))); - emit_move_insn (temp, gen_rtx_IOR (DImode, temp, GEN_INT (ud1))); - emit_move_insn (dest, - gen_rtx_AND (DImode, temp, GEN_INT (0xffffffff))); - } + + /* lis; ori; rldicl */ + emit_move_insn (temp, GEN_INT (sext_hwi (ud2 << 16, 32))); + emit_move_insn (temp, gen_rtx_IOR (DImode, temp, GEN_INT (ud1))); + emit_move_insn (dest, gen_rtx_AND (DImode, temp, GEN_INT (0xffffffff))); + + return; } - else if (ud1 == ud3 && ud2 == ud4) + + if (ud1 == ud3 && ud2 == ud4) { temp = !can_create_pseudo_p () ? dest : gen_reg_rtx (DImode); HOST_WIDE_INT num = (ud2 << 16) | ud1; @@ -10568,9 +10579,11 @@ rs6000_emit_set_long_const (rtx dest, HOST_WIDE_INT c) rtx one = gen_rtx_AND (DImode, temp, GEN_INT (0xffffffff)); rtx two = gen_rtx_ASHIFT (DImode, temp, GEN_INT (32)); emit_move_insn (dest, gen_rtx_IOR (DImode, one, two)); + + return; } - else if ((ud4 == 0xffff && (ud3 & 0x8000)) - || (ud4 == 0 && ! (ud3 & 0x8000))) + + if ((ud4 == 0xffff && (ud3 & 0x8000)) || (ud4 == 0 && !(ud3 & 0x8000))) { temp = !can_create_pseudo_p () ? dest : gen_reg_rtx (DImode); @@ -10581,8 +10594,11 @@ rs6000_emit_set_long_const (rtx dest, HOST_WIDE_INT c) gen_rtx_ASHIFT (DImode, temp, GEN_INT (16))); if (ud1 != 0) emit_move_insn (dest, gen_rtx_IOR (DImode, temp, GEN_INT (ud1))); + + return; } - else if (TARGET_PREFIXED) + + if (TARGET_PREFIXED) { if (can_create_pseudo_p ()) { @@ -10594,60 +10610,55 @@ rs6000_emit_set_long_const (rtx dest, HOST_WIDE_INT c) emit_insn (gen_rotldi3_insert_3 (dest, temp, GEN_INT (32), temp1, GEN_INT (0xffffffff))); + return; } - else - { - /* pli A,H + sldi A,32 + paddi A,A,L. */ - emit_move_insn (dest, GEN_INT ((ud4 << 16) | ud3)); - emit_move_insn (dest, gen_rtx_ASHIFT (DImode, dest, GEN_INT (32))); + /* pli A,H + sldi A,32 + paddi A,A,L. 
*/ + emit_move_insn (dest, GEN_INT ((ud4 << 16) | ud3)); + emit_move_insn (dest, gen_rtx_ASHIFT (DImode, dest, GEN_INT (32))); - bool can_use_paddi = REGNO (dest) != FIRST_GPR_REGNO; + bool can_use_paddi = REGNO (dest) != FIRST_GPR_REGNO; + /* Use paddi for the low 32 bits. */ + if (ud2 != 0 && ud1 != 0 && can_use_paddi) + emit_move_insn (dest, gen_rtx_PLUS (DImode, dest, + GEN_INT ((ud2 << 16) | ud1))); - /* Use paddi for the low 32 bits. */ - if (ud2 != 0 && ud1 != 0 && can_use_paddi) - emit_move_insn (dest, gen_rtx_PLUS (DImode, dest, - GEN_INT ((ud2 << 16) | ud1))); + /* Use oris, ori for low 32 bits. */ + if (ud2 != 0 && (ud1 == 0 || !can_use_paddi)) + emit_move_insn (dest, gen_rtx_IOR (DImode, dest, GEN_INT (ud2 << 16))); + if (ud1 != 0 && (ud2 == 0 || !can_use_paddi)) + emit_move_insn (dest, gen_rtx_IOR (DImode, dest, GEN_INT (ud1))); - /* Use oris, ori for low 32 bits. */ - if (ud2 != 0 && (ud1 == 0 || !can_use_paddi)) - emit_move_insn (dest, - gen_rtx_IOR (DImode, dest, GEN_INT (ud2 << 16))); - if (ud1 != 0 && (ud2 == 0 || !can_use_paddi)) - emit_move_insn (dest, gen_rtx_IOR (DImode, dest, GEN_INT (ud1))); - } + return; } - else - { - if (can_create_pseudo_p ()) - { - /* lis HIGH,UD4 ; ori HIGH,UD3 ; - lis LOW,UD2 ; ori LOW,UD1 ; rldimi LOW,HIGH,32,0. */ - rtx high = gen_reg_rtx (DImode); - rtx low = gen_reg_rtx (DImode); - HOST_WIDE_INT num = (ud2 << 16) | ud1; - rs6000_emit_set_long_const (low, sext_hwi (num, 32)); - num = (ud4 << 16) | ud3; - rs6000_emit_set_long_const (high, sext_hwi (num, 32)); - emit_insn (gen_rotldi3_insert_3 (dest, high, GEN_INT (32), low, - GEN_INT (0xffffffff))); - } - else - { - /* lis DEST,UD4 ; ori DEST,UD3 ; rotl DEST,32 ; - oris DEST,UD2 ; ori DEST,UD1. */ - emit_move_insn (dest, GEN_INT (sext_hwi (ud4 << 16, 32))); - if (ud3 != 0) - emit_move_insn (dest, gen_rtx_IOR (DImode, dest, GEN_INT (ud3))); - emit_move_insn (dest, gen_rtx_ASHIFT (DImode, dest, GEN_INT (32))); - if (ud2 != 0) - emit_move_insn (dest, - gen_rtx_IOR (DImode, dest, GEN_INT (ud2 << 16))); - if (ud1 != 0) - emit_move_insn (dest, gen_rtx_IOR (DImode, dest, GEN_INT (ud1))); - } + /* using 5 insn to build the constant. */ + if (can_create_pseudo_p ()) + { + /* lis HIGH,UD4 ; ori HIGH,UD3 ; + lis LOW,UD2 ; ori LOW,UD1 ; rldimi LOW,HIGH,32,0. */ + rtx high = gen_reg_rtx (DImode); + rtx low = gen_reg_rtx (DImode); + HOST_WIDE_INT num = (ud2 << 16) | ud1; + rs6000_emit_set_long_const (low, sext_hwi (num, 32)); + num = (ud4 << 16) | ud3; + rs6000_emit_set_long_const (high, sext_hwi (num, 32)); + emit_insn (gen_rotldi3_insert_3 (dest, high, GEN_INT (32), low, + GEN_INT (0xffffffff))); + return; } + + /* lis DEST,UD4 ; ori DEST,UD3 ; rotl DEST,32 ; + oris DEST,UD2 ; ori DEST,UD1. */ + emit_move_insn (dest, GEN_INT (sext_hwi (ud4 << 16, 32))); + if (ud3 != 0) + emit_move_insn (dest, gen_rtx_IOR (DImode, dest, GEN_INT (ud3))); + + emit_move_insn (dest, gen_rtx_ASHIFT (DImode, dest, GEN_INT (32))); + if (ud2 != 0) + emit_move_insn (dest, gen_rtx_IOR (DImode, dest, GEN_INT (ud2 << 16))); + if (ud1 != 0) + emit_move_insn (dest, gen_rtx_IOR (DImode, dest, GEN_INT (ud1))); } /* Helper for the following. 
Get rid of [r+r] memory refs @@ -13356,10 +13367,10 @@ rs6000_preferred_reload_class (rtx x, enum reg_class rclass) { if (TARGET_P8_VECTOR) return rclass; - else if (rclass == ALTIVEC_REGS || rclass == VSX_REGS) + if (rclass == ALTIVEC_REGS || rclass == VSX_REGS) return ALTIVEC_REGS; - else - return NO_REGS; + + return NO_REGS; } /* ISA 3.0 can load -128..127 using the XXSPLTIB instruction and @@ -13628,7 +13639,7 @@ rs6000_can_change_mode_class (machine_mode from, if (to_float128_vector_p && from_float128_vector_p) return true; - else if (to_float128_vector_p || from_float128_vector_p) + if (to_float128_vector_p || from_float128_vector_p) return false; /* TDmode in floating-mode registers must always go into a register @@ -13757,7 +13768,7 @@ rs6000_output_move_128bit (rtx operands[]) ? "mfvsrd %0,%x1\n\tmfvsrld %L0,%x1" : "mfvsrd %L0,%x1\n\tmfvsrld %0,%x1"); - else if (TARGET_VSX && TARGET_DIRECT_MOVE && src_vsx_p) + if (TARGET_VSX && TARGET_DIRECT_MOVE && src_vsx_p) return "#"; } @@ -13766,12 +13777,12 @@ rs6000_output_move_128bit (rtx operands[]) if (src_vsx_p) return "xxlor %x0,%x1,%x1"; - else if (TARGET_DIRECT_MOVE_128 && src_gpr_p) + if (TARGET_DIRECT_MOVE_128 && src_gpr_p) return (WORDS_BIG_ENDIAN ? "mtvsrdd %x0,%1,%L1" : "mtvsrdd %x0,%L1,%1"); - else if (TARGET_DIRECT_MOVE && src_gpr_p) + if (TARGET_DIRECT_MOVE && src_gpr_p) return "#"; } @@ -13789,34 +13800,33 @@ rs6000_output_move_128bit (rtx operands[]) { if (TARGET_QUAD_MEMORY && quad_load_store_p (dest, src)) return "lq %0,%1"; - else - return "#"; + + return "#"; } - else if (TARGET_ALTIVEC && dest_vmx_p - && altivec_indexed_or_indirect_operand (src, mode)) + if (TARGET_ALTIVEC && dest_vmx_p + && altivec_indexed_or_indirect_operand (src, mode)) return "lvx %0,%y1"; - else if (TARGET_VSX && dest_vsx_p) + if (TARGET_VSX && dest_vsx_p) { if (mode_supports_dq_form (mode) && quad_address_p (XEXP (src, 0), mode, true)) return "lxv %x0,%1"; - else if (TARGET_P9_VECTOR) + if (TARGET_P9_VECTOR) return "lxvx %x0,%y1"; - else if (mode == V16QImode || mode == V8HImode || mode == V4SImode) + if (mode == V16QImode || mode == V8HImode || mode == V4SImode) return "lxvw4x %x0,%y1"; - else - return "lxvd2x %x0,%y1"; + return "lxvd2x %x0,%y1"; } - else if (TARGET_ALTIVEC && dest_vmx_p) + if (TARGET_ALTIVEC && dest_vmx_p) return "lvx %0,%y1"; - else if (dest_fp_p) + if (dest_fp_p) return "#"; } @@ -13825,51 +13835,47 @@ rs6000_output_move_128bit (rtx operands[]) { if (src_gpr_p) { - if (TARGET_QUAD_MEMORY && quad_load_store_p (dest, src)) + if (TARGET_QUAD_MEMORY && quad_load_store_p (dest, src)) return "stq %1,%0"; - else - return "#"; + + return "#"; } - else if (TARGET_ALTIVEC && src_vmx_p - && altivec_indexed_or_indirect_operand (dest, mode)) + if (TARGET_ALTIVEC && src_vmx_p + && altivec_indexed_or_indirect_operand (dest, mode)) return "stvx %1,%y0"; - else if (TARGET_VSX && src_vsx_p) + if (TARGET_VSX && src_vsx_p) { if (mode_supports_dq_form (mode) && quad_address_p (XEXP (dest, 0), mode, true)) return "stxv %x1,%0"; - else if (TARGET_P9_VECTOR) + if (TARGET_P9_VECTOR) return "stxvx %x1,%y0"; - else if (mode == V16QImode || mode == V8HImode || mode == V4SImode) + if (mode == V16QImode || mode == V8HImode || mode == V4SImode) return "stxvw4x %x1,%y0"; - else - return "stxvd2x %x1,%y0"; + return "stxvd2x %x1,%y0"; } - else if (TARGET_ALTIVEC && src_vmx_p) + if (TARGET_ALTIVEC && src_vmx_p) return "stvx %1,%y0"; - else if (src_fp_p) + if (src_fp_p) return "#"; } /* Constants. 
*/ else if (dest_regno >= 0 - && (CONST_INT_P (src) - || CONST_WIDE_INT_P (src) - || CONST_DOUBLE_P (src) - || GET_CODE (src) == CONST_VECTOR)) + && (CONST_INT_P (src) || CONST_WIDE_INT_P (src) + || CONST_DOUBLE_P (src) || GET_CODE (src) == CONST_VECTOR)) { if (dest_gpr_p) return "#"; - else if ((dest_vmx_p && TARGET_ALTIVEC) - || (dest_vsx_p && TARGET_VSX)) + if ((dest_vmx_p && TARGET_ALTIVEC) || (dest_vsx_p && TARGET_VSX)) return output_vec_const_move (operands); } @@ -16201,7 +16207,7 @@ rs6000_emit_vector_cond_expr (rtx dest, rtx op_true, rtx op_false, return 1; } - else if (op_true == constant_0 && op_false == constant_m1) + if (op_true == constant_0 && op_false == constant_m1) { emit_insn (gen_rtx_SET (dest, gen_rtx_NOT (dest_mode, mask))); return 1; @@ -18626,7 +18632,7 @@ adjacent_mem_locations (rtx mem1, rtx mem2) { if (off1 + size1 == off2) return mem1; - else if (off2 + size2 == off1) + if (off2 + size2 == off1) return mem2; } @@ -19425,7 +19431,7 @@ is_fusable_store (rtx_insn *insn, rtx *str_mem) if (INTEGRAL_MODE_P (mode)) /* Must be word or dword size. */ return (size == 4 || size == 8); - else if (FLOAT_MODE_P (mode)) + if (FLOAT_MODE_P (mode)) /* Must be dword size. */ return (size == 8); } @@ -19572,7 +19578,7 @@ insn_terminates_group_p (rtx_insn *insn, enum group_termination which_group) if (which_group == current_group) return last; - else if (which_group == previous_group) + if (which_group == previous_group) return first; return false; @@ -21189,10 +21195,10 @@ rs6000_elf_reloc_rw_mask (void) { if (flag_pic) return 3; - else if (DEFAULT_ABI == ABI_AIX || DEFAULT_ABI == ABI_ELFv2) + if (DEFAULT_ABI == ABI_AIX || DEFAULT_ABI == ABI_ELFv2) return 2; - else - return 0; + + return 0; } /* Record an element in the table of global constructors. SYMBOL is @@ -21617,28 +21623,26 @@ rs6000_xcoff_select_section (tree decl, int reloc, { if (TREE_PUBLIC (decl)) return read_only_data_section; - else - return read_only_private_data_section; + + return read_only_private_data_section; } - else - { + #if HAVE_AS_TLS - if (TREE_CODE (decl) == VAR_DECL && DECL_THREAD_LOCAL_P (decl)) - { - if (bss_initializer_p (decl)) - return tls_comm_section; - else if (TREE_PUBLIC (decl)) - return tls_data_section; - else - return tls_private_data_section; - } - else -#endif - if (TREE_PUBLIC (decl)) - return data_section; - else - return private_data_section; + if (TREE_CODE (decl) == VAR_DECL && DECL_THREAD_LOCAL_P (decl)) + { + if (bss_initializer_p (decl)) + return tls_comm_section; + if (TREE_PUBLIC (decl)) + return tls_data_section; + + return tls_private_data_section; } +#endif + + if (TREE_PUBLIC (decl)) + return data_section; + + return private_data_section; } static void @@ -22504,7 +22508,7 @@ rs6000_rtx_costs (rtx x, machine_mode mode, int outer_code, *total = COSTS_N_INSNS (1); return true; } - else if (FLOAT_MODE_P (mode) && TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT) + if (FLOAT_MODE_P (mode) && TARGET_PPC_GFXOPT && TARGET_HARD_FLOAT) { *total = rs6000_cost->fp; return false; @@ -24232,10 +24236,10 @@ rs6000_scalar_mode_supported_p (scalar_mode mode) if (DECIMAL_FLOAT_MODE_P (mode)) return default_decimal_float_supported_p (); - else if (TARGET_FLOAT128_TYPE && (mode == KFmode || mode == IFmode)) + if (TARGET_FLOAT128_TYPE && (mode == KFmode || mode == IFmode)) return true; - else - return default_scalar_mode_supported_p (mode); + + return default_scalar_mode_supported_p (mode); } /* Target hook for libgcc_floating_mode_supported_p. 
*/ @@ -25670,7 +25674,8 @@ rs6000_update_ipa_fn_target_info (unsigned int &info, const gimple *stmt) info |= RS6000_FN_TARGET_INFO_HTM; return false; } - else if (gimple_code (stmt) == GIMPLE_CALL) + + if (gimple_code (stmt) == GIMPLE_CALL) { tree fndecl = gimple_call_fndecl (stmt); if (fndecl && fndecl_built_in_p (fndecl, BUILT_IN_MD)) @@ -26725,23 +26730,22 @@ reg_to_non_prefixed (rtx reg, machine_mode mode) if (mode == SFmode || size == 8 || FLOAT128_2REG_P (mode)) return NON_PREFIXED_D; - else if (size < 8) + if (size < 8) return NON_PREFIXED_X; - else if (TARGET_VSX && size >= 16 + if (TARGET_VSX && size >= 16 && (VECTOR_MODE_P (mode) || VECTOR_ALIGNMENT_P (mode) || mode == TImode || mode == CTImode)) return (TARGET_P9_VECTOR) ? NON_PREFIXED_DQ : NON_PREFIXED_X; - else - return NON_PREFIXED_DEFAULT; + return NON_PREFIXED_DEFAULT; } /* Altivec registers use DS-mode for scalars, and DQ-mode for vectors, IEEE 128-bit floating point, and 128-bit integers. Before power9, only indexed addressing was available. */ - else if (ALTIVEC_REGNO_P (r)) + if (ALTIVEC_REGNO_P (r)) { if (!TARGET_P9_VECTOR) return NON_PREFIXED_X; @@ -26749,23 +26753,22 @@ reg_to_non_prefixed (rtx reg, machine_mode mode) if (mode == SFmode || size == 8 || FLOAT128_2REG_P (mode)) return NON_PREFIXED_DS; - else if (size < 8) + if (size < 8) return NON_PREFIXED_X; - else if (TARGET_VSX && size >= 16 + if (TARGET_VSX && size >= 16 && (VECTOR_MODE_P (mode) || VECTOR_ALIGNMENT_P (mode) || mode == TImode || mode == CTImode)) return NON_PREFIXED_DQ; - else - return NON_PREFIXED_DEFAULT; + return NON_PREFIXED_DEFAULT; } /* GPR registers use DS-mode for 64-bit items on 64-bit systems, and D-mode otherwise. Assume that any other register, such as LR, CRs, etc. will go through the GPR registers for memory operations. */ - else if (TARGET_POWERPC64 && size >= 8) + if (TARGET_POWERPC64 && size >= 8) return NON_PREFIXED_DS; return NON_PREFIXED_D; @@ -27125,7 +27128,7 @@ rs6000_split_logical_inner (rtx dest, return; } - else if (value == mask) + if (value == mask) { if (!rtx_equal_p (dest, op1)) emit_insn (gen_rtx_SET (dest, op1)); @@ -29234,7 +29237,7 @@ rs6000_opaque_type_invalid_use_p (gimple *stmt) error ("type %<__vector_quad%> requires the %qs option", "-mmma"); return true; } - else if (mv == vector_pair_type_node) + if (mv == vector_pair_type_node) { error ("type %<__vector_pair%> requires the %qs option", "-mmma"); return true;