From patchwork Tue Jan 24 13:54:20 2023
From: "Andre Vieira (lists)"
Date: Tue, 24 Jan 2023 13:54:20 +0000
Subject: [PATCH 2/3] arm: Remove unnecessary zero-extending of MVE predicates before use [PR 107674]
To: "gcc-patches@gcc.gnu.org"
Cc: Richard Sandiford, Richard Earnshaw, Richard Biener, Kyrylo Tkachov
Message-ID: <22ba05fb-774e-62b8-64a2-90c5d73fcaba@arm.com>
In-Reply-To: <13d03aef-f5d1-03fe-5281-31921d24dce0@arm.com>
Hi,

This patch teaches GCC that zero-extending an MVE predicate from 16 bits to
32 bits and then using only the low 16 bits is a no-op.  It does so in two
steps:
- it lets GCC know that it can access any MVE predicate mode using any other
  MVE predicate mode without needing to copy it, using the
  TARGET_MODES_TIEABLE_P hook,
- it teaches simplify_subreg to optimize a subreg with a vector outermode by
  replacing that outermode with a same-sized integer mode and trying the
  available optimizations; if that succeeds, it wraps the result in a subreg
  casting it back to the original vector outermode.

This removes the unnecessary zero-extending shown in PR 107674 (though it is
a sign-extend there), which was introduced in GCC 11.

Bootstrapped on aarch64-none-linux-gnu and regression tested on arm-none-eabi
and armeb-none-eabi for armv8.1-m.main+mve.fp.

OK for trunk?

gcc/ChangeLog:

	PR target/107674
	* config/arm/arm.cc (arm_hard_regno_mode_ok): Use new
	VALID_MVE_PRED_MODE macro.
	(arm_modes_tieable_p): Make MVE predicate modes tieable.
	* config/arm/arm.h (VALID_MVE_PRED_MODE): New define.
	* simplify-rtx.cc (simplify_context::simplify_subreg): Teach
	simplify_subreg to simplify subregs where the outermode is not
	scalar.

gcc/testsuite/ChangeLog:

	* gcc.target/arm/mve/mve_vpt.c: Change to remove unnecessary
	zero-extend.

diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h
index 6f7ecf9128047647fc41677e634cd9612a13242b..4352c830cb6d2e632a225edea861b5ceb35dd035 100644
--- a/gcc/config/arm/arm.h
+++ b/gcc/config/arm/arm.h
@@ -1091,6 +1091,10 @@ extern const int arm_arch_cde_coproc_bits[];
    || (MODE) == V16QImode || (MODE) == V8HFmode || (MODE) == V4SFmode \
    || (MODE) == V2DFmode)
 
+#define VALID_MVE_PRED_MODE(MODE) \
+  ((MODE) == HImode \
+   || (MODE) == V16BImode || (MODE) == V8BImode || (MODE) == V4BImode)
+
 #define VALID_MVE_SI_MODE(MODE) \
   ((MODE) == V2DImode ||(MODE) == V4SImode || (MODE) == V8HImode \
    || (MODE) == V16QImode)
diff --git a/gcc/config/arm/arm.cc b/gcc/config/arm/arm.cc
index 3f171188de513e258369397e4726afe27bd9fdbf..18460ef5280be8c1df85eff424a1bf66d6019c0a 100644
--- a/gcc/config/arm/arm.cc
+++ b/gcc/config/arm/arm.cc
@@ -25564,10 +25564,7 @@ arm_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
 	return false;
 
       if (IS_VPR_REGNUM (regno))
-	return mode == HImode
-	  || mode == V16BImode
-	  || mode == V8BImode
-	  || mode == V4BImode;
+	return VALID_MVE_PRED_MODE (mode);
 
       if (TARGET_THUMB1)
 	/* For the Thumb we only allow values bigger than SImode in
@@ -25646,6 +25643,10 @@ arm_modes_tieable_p (machine_mode mode1, machine_mode mode2)
   if (GET_MODE_CLASS (mode1) == GET_MODE_CLASS (mode2))
     return true;
 
+  if (TARGET_HAVE_MVE
+      && (VALID_MVE_PRED_MODE (mode1) && VALID_MVE_PRED_MODE (mode2)))
+    return true;
+
   /* We specifically want to allow elements of "structure" modes to
      be tieable to the structure.  This more general condition allows
      other rarer situations too.
     */
diff --git a/gcc/simplify-rtx.cc b/gcc/simplify-rtx.cc
index 7fb1e97fbea4e7b8b091f5724ebe0cb61eee7ec3..a951272186585c0a5cc3e0155285e7a635865f42 100644
--- a/gcc/simplify-rtx.cc
+++ b/gcc/simplify-rtx.cc
@@ -7652,6 +7652,22 @@ simplify_context::simplify_subreg (machine_mode outermode, rtx op,
 	}
     }
 
+  /* Try simplifying a SUBREG expression of a non-integer OUTERMODE by using a
+     NEW_OUTERMODE of the same size instead, other simplifications rely on
+     integer to integer subregs and we'd potentially miss out on optimizations
+     otherwise.  */
+  if (known_gt (GET_MODE_SIZE (innermode),
+		GET_MODE_SIZE (outermode))
+      && SCALAR_INT_MODE_P (innermode)
+      && !SCALAR_INT_MODE_P (outermode)
+      && int_mode_for_size (GET_MODE_BITSIZE (outermode),
+			    0).exists (&int_outermode))
+    {
+      rtx tem = simplify_subreg (int_outermode, op, innermode, byte);
+      if (tem)
+	return simplify_gen_subreg (outermode, tem, GET_MODE (tem), byte);
+    }
+
   /* If OP is a vector comparison and the subreg is not changing the
      number of elements or the size of the elements, change the result
      of the comparison to the new mode.  */
diff --git a/gcc/testsuite/gcc.target/arm/mve/mve_vpt.c b/gcc/testsuite/gcc.target/arm/mve/mve_vpt.c
index 26a565b79dd1348e361b3aa23a1d6e6d13bffce8..8e562a9f065eff157f63ebd5acf9af0a2155b5c5 100644
--- a/gcc/testsuite/gcc.target/arm/mve/mve_vpt.c
+++ b/gcc/testsuite/gcc.target/arm/mve/mve_vpt.c
@@ -16,9 +16,6 @@ void test0 (uint8_t *a, uint8_t *b, uint8_t *c)
 **	vldrb.8	q2, \[r0\]
 **	vldrb.8	q1, \[r1\]
 **	vcmp.i8	eq, q2, q1
-**	vmrs	r3, p0	@ movhi
-**	uxth	r3, r3
-**	vmsr	p0, r3	@ movhi
 **	vpst
 **	vaddt.i8	q3, q2, q1
 **	vpst
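
For context, an MVE intrinsics kernel of roughly the following shape produces
the vcmp.i8/vpst/vaddt.i8 sequence checked above.  This is only an
illustrative sketch, not the contents of mve_vpt.c; the function and variable
names are made up.

    #include <arm_mve.h>

    /* Illustrative only: compare two vectors and use the resulting MVE
       predicate to gate a merging add and a predicated store.  Before this
       patch the predicate was moved to a core register, zero-extended with
       uxth and moved back to p0 before being used.  */
    void
    predicated_add (uint8_t *a, uint8_t *b, uint8_t *c)
    {
      uint8x16_t va = vldrbq_u8 (a);
      uint8x16_t vb = vldrbq_u8 (b);
      mve_pred16_t p = vcmpeqq_u8 (va, vb);
      uint8x16_t sum = vaddq_m_u8 (va, va, vb, p);
      vstrbq_p_u8 (c, sum, p);
    }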