From patchwork Wed Aug 28 14:34:21 2024
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 1977910
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] Make some smallest_int_mode_for_size calls cope with failure
Date: Wed, 28 Aug 2024 15:34:21 +0100

smallest_int_mode_for_size now returns an optional mode rather than
aborting on failure.  This patch adjusts a couple of callers so that
they fail gracefully when no mode exists.

There should be no behavioural change, since anything that triggers
the new return paths would previously have aborted.  I just think this
is how the code would have been written if the option had been
available earlier.

Tested on aarch64-linux-gnu.  OK to install?

Richard


gcc/
	* dse.cc (find_shift_sequence): Allow smallest_int_mode_for_size
	to fail.
	* optabs.cc (expand_twoval_binop_libfunc): Likewise.
---
 gcc/dse.cc    | 16 ++++++++--------
 gcc/optabs.cc |  6 ++++--
 2 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/gcc/dse.cc b/gcc/dse.cc
index c3feff06f86..75825a44cb9 100644
--- a/gcc/dse.cc
+++ b/gcc/dse.cc
@@ -1717,12 +1717,12 @@ dump_insn_info (const char * start, insn_info_t insn_info)
    line up, we need to extract the value from lower part of the rhs of
    the store, shift it, and then put it into a form that can be shoved
    into the read_insn.  This function generates a right SHIFT of a
-   value that is at least ACCESS_SIZE bytes wide of READ_MODE.  The
+   value that is at least ACCESS_BYTES bytes wide of READ_MODE.  The
    shift sequence is returned or NULL if we failed to find a
    shift.  */
 
 static rtx
-find_shift_sequence (poly_int64 access_size,
+find_shift_sequence (poly_int64 access_bytes,
 		     store_info *store_info,
 		     machine_mode read_mode,
 		     poly_int64 shift, bool speed, bool require_cst)
@@ -1734,11 +1734,11 @@ find_shift_sequence (poly_int64 access_size,
   /* If a constant was stored into memory, try to simplify it here,
      otherwise the cost of the shift might preclude this optimization
      e.g. at -Os, even when no actual shift will be needed.  */
+  auto access_bits = access_bytes * BITS_PER_UNIT;
   if (store_info->const_rhs
-      && known_le (access_size, GET_MODE_SIZE (MAX_MODE_INT)))
+      && known_le (access_bytes, GET_MODE_SIZE (MAX_MODE_INT))
+      && smallest_int_mode_for_size (access_bits).exists (&new_mode))
     {
-      auto new_mode = smallest_int_mode_for_size
-	(access_size * BITS_PER_UNIT).require ();
       auto byte = subreg_lowpart_offset (new_mode, store_mode);
       rtx ret
 	= simplify_subreg (new_mode, store_info->const_rhs, store_mode, byte);
@@ -1810,7 +1810,7 @@ find_shift_sequence (poly_int64 access_size,
 	    }
 	}
 
-      if (maybe_lt (GET_MODE_SIZE (new_mode), access_size))
+      if (maybe_lt (GET_MODE_SIZE (new_mode), access_bytes))
 	continue;
 
       new_reg = gen_reg_rtx (new_mode);
@@ -1839,8 +1839,8 @@ find_shift_sequence (poly_int64 access_size,
 	 of the arguments and could be precomputed.  It may not be
 	 worth doing so.  We could precompute if worthwhile or at least
 	 cache the results.  The result
-	 technically depends on both SHIFT and ACCESS_SIZE,
-	 but in practice the answer will depend only on ACCESS_SIZE.  */
+	 technically depends on both SHIFT and ACCESS_BYTES,
+	 but in practice the answer will depend only on ACCESS_BYTES.  */
 
       if (cost > COSTS_N_INSNS (1))
 	continue;
diff --git a/gcc/optabs.cc b/gcc/optabs.cc
index ded9cc3d947..2bcb3f7b47a 100644
--- a/gcc/optabs.cc
+++ b/gcc/optabs.cc
@@ -2551,8 +2551,10 @@ expand_twoval_binop_libfunc (optab binoptab, rtx op0, rtx op1,
 
   /* The value returned by the library function will have twice as many
      bits as the nominal MODE.  */
-  libval_mode
-    = smallest_int_mode_for_size (2 * GET_MODE_BITSIZE (mode)).require ();
+  auto return_size = 2 * GET_MODE_BITSIZE (mode);
+  if (!smallest_int_mode_for_size (return_size).exists (&libval_mode))
+    return false;
+
   start_sequence ();
   libval = emit_library_call_value (libfunc, NULL_RTX, LCT_CONST,
				    libval_mode,
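
For reference, both hunks move to the usual opt_mode calling convention.
The following is an illustrative sketch only, not part of the patch; the
names follow the dse.cc hunk above:

  scalar_int_mode new_mode;
  /* ACCESS_BITS is the required width in bits.  */
  if (smallest_int_mode_for_size (access_bits).exists (&new_mode))
    {
      /* NEW_MODE is the smallest integer mode with at least
	 ACCESS_BITS bits; proceed with it.  */
    }
  else
    {
      /* No integer mode is wide enough: fail gracefully here, where
	 .require () would previously have aborted.  */
    }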