From patchwork Tue Oct 15 07:35:31 2024
X-Patchwork-Submitter: Jennifer Schmitz
X-Patchwork-Id: 1997228
From: Jennifer Schmitz
To: gcc-patches@gcc.gnu.org
CC: Richard Sandiford, Kyrylo Tkachov
Subject: [PATCH] SVE intrinsics: Add fold_active_lanes_to method to refactor svmul and svdiv.
Date: Tue, 15 Oct 2024 07:35:31 +0000

As suggested in
https://gcc.gnu.org/pipermail/gcc-patches/2024-September/663275.html,
this patch adds the method gimple_folder::fold_active_lanes_to (tree X).
This method folds active lanes to X and sets inactive lanes according to
the predication, returning a new gimple statement. That makes folding of
SVE intrinsics easier and reduces code duplication in the
svxxx_impl::fold implementations.
Using this new method, svdiv_impl::fold and svmul_impl::fold were
refactored.
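To illustrate the effect of the method, consider this minimal sketch
(not part of the patch; the function name f and the constant operand
are chosen for illustration only):

  #include <arm_sve.h>

  svint32_t
  f (svbool_t pg, svint32_t x)
  {
    /* The active lanes of the result equal x; with _z predication
       the inactive lanes are zero.  */
    return svmul_s32_z (pg, x, svdup_s32 (1));
  }

With the new method, such a call is folded on the gimple level to a
VEC_COND_EXPR that selects x for active lanes and the zero vector for
inactive lanes, so the multiplication disappears; with _x predication
or an all-true predicate, it folds to plain x.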
Additionally, the method was used for two optimizations:
1) Fold svdiv to the dividend, if the divisor is all ones, and
2) for svmul, if one of the operands is all ones, fold to the other
operand.
Both optimizations were previously applied to _x and _m predication on
the RTL level, but not for _z, where svdiv/svmul were still being used.
For both optimizations, codegen was improved by this patch, for example
by skipping sel instructions with all-same operands and replacing sel
instructions by mov instructions.

The patch was bootstrapped and regtested on aarch64-linux-gnu, no
regression.
OK for mainline?

Signed-off-by: Jennifer Schmitz

gcc/
	* config/aarch64/aarch64-sve-builtins-base.cc (svdiv_impl::fold):
	Refactor using fold_active_lanes_to and fold to dividend, if the
	divisor is all ones.
	(svmul_impl::fold): Refactor using fold_active_lanes_to and fold
	to the other operand, if one of the operands is all ones.
	* config/aarch64/aarch64-sve-builtins.h: Declare
	gimple_folder::fold_active_lanes_to (tree).
	* config/aarch64/aarch64-sve-builtins.cc
	(gimple_folder::fold_active_lanes_to): Add new method to fold
	active lanes to given argument and set inactive lanes according
	to the predication.

gcc/testsuite/
	* gcc.target/aarch64/sve/acle/asm/div_s32.c: Adjust expected outcome.
	* gcc.target/aarch64/sve/acle/asm/div_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_u64.c: Likewise.
	* gcc.target/aarch64/sve/fold_div_zero.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s16.c: New test.
	* gcc.target/aarch64/sve/acle/asm/mul_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u8.c: Likewise.
	* gcc.target/aarch64/sve/mul_const_run.c: Likewise.
---
 .../aarch64/aarch64-sve-builtins-base.cc      | 39 ++++++++---------
 gcc/config/aarch64/aarch64-sve-builtins.cc    | 27 ++++++++++++
 gcc/config/aarch64/aarch64-sve-builtins.h     |  1 +
 .../gcc.target/aarch64/sve/acle/asm/div_s32.c | 13 +++---
 .../gcc.target/aarch64/sve/acle/asm/div_s64.c | 13 +++---
 .../gcc.target/aarch64/sve/acle/asm/div_u32.c | 13 +++---
 .../gcc.target/aarch64/sve/acle/asm/div_u64.c | 13 +++---
 .../gcc.target/aarch64/sve/acle/asm/mul_s16.c | 43 +++++++++++++++++--
 .../gcc.target/aarch64/sve/acle/asm/mul_s32.c | 43 +++++++++++++++++--
 .../gcc.target/aarch64/sve/acle/asm/mul_s64.c | 43 +++++++++++++++++--
 .../gcc.target/aarch64/sve/acle/asm/mul_s8.c  | 43 +++++++++++++++++--
 .../gcc.target/aarch64/sve/acle/asm/mul_u16.c | 43 +++++++++++++++++--
 .../gcc.target/aarch64/sve/acle/asm/mul_u32.c | 43 +++++++++++++++++--
 .../gcc.target/aarch64/sve/acle/asm/mul_u64.c | 43 +++++++++++++++++--
 .../gcc.target/aarch64/sve/acle/asm/mul_u8.c  | 43 +++++++++++++++++--
 .../gcc.target/aarch64/sve/fold_div_zero.c    | 12 ++----
 .../gcc.target/aarch64/sve/mul_const_run.c    |  6 +++
 17 files changed, 387 insertions(+), 94 deletions(-)

diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
index 1c17149e1f0..70bd83005d7 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
@@ -758,18 +758,15 @@ public:
     if (auto *res = f.fold_const_binary (TRUNC_DIV_EXPR))
       return res;
 
-    /* If the dividend is all zeros, fold to zero vector.  */
+    /* If the divisor is all ones, fold to dividend.  */
     tree op1 = gimple_call_arg (f.call, 1);
-    if (integer_zerop (op1))
-      return gimple_build_assign (f.lhs, op1);
-
-    /* If the divisor is all zeros, fold to zero vector.  */
-    tree pg = gimple_call_arg (f.call, 0);
     tree op2 = gimple_call_arg (f.call, 2);
-    if (integer_zerop (op2)
-	&& (f.pred != PRED_m
-	    || is_ptrue (pg, f.type_suffix (0).element_bytes)))
-      return gimple_build_assign (f.lhs, build_zero_cst (TREE_TYPE (f.lhs)));
+    if (integer_onep (op2))
+      return f.fold_active_lanes_to (op1);
+
+    /* If one of the operands is all zeros, fold to zero vector.  */
+    if (integer_zerop (op1) || integer_zerop (op2))
+      return f.fold_active_lanes_to (build_zero_cst (TREE_TYPE (f.lhs)));
 
     /* If the divisor is a uniform power of 2, fold to a shift
        instruction.  */
@@ -2024,20 +2021,21 @@ public:
     if (auto *res = f.fold_const_binary (MULT_EXPR))
       return res;
 
-    /* If one of the operands is all zeros, fold to zero vector.  */
+    /* If one of the operands is all ones, fold to other operand.  */
     tree op1 = gimple_call_arg (f.call, 1);
-    if (integer_zerop (op1))
-      return gimple_build_assign (f.lhs, op1);
-
-    tree pg = gimple_call_arg (f.call, 0);
     tree op2 = gimple_call_arg (f.call, 2);
-    if (integer_zerop (op2)
-	&& (f.pred != PRED_m
-	    || is_ptrue (pg, f.type_suffix (0).element_bytes)))
-      return gimple_build_assign (f.lhs, build_zero_cst (TREE_TYPE (f.lhs)));
+    if (integer_onep (op1))
+      return f.fold_active_lanes_to (op2);
+    if (integer_onep (op2))
+      return f.fold_active_lanes_to (op1);
+
+    /* If one of the operands is all zeros, fold to zero vector.  */
+    if (integer_zerop (op1) || integer_zerop (op2))
+      return f.fold_active_lanes_to (build_zero_cst (TREE_TYPE (f.lhs)));
 
     /* If one of the operands is a uniform power of 2, fold to a left shift
        by immediate.  */
+    tree pg = gimple_call_arg (f.call, 0);
     tree op1_cst = uniform_integer_cst_p (op1);
     tree op2_cst = uniform_integer_cst_p (op2);
     tree shift_op1, shift_op2;
@@ -2056,9 +2054,6 @@ public:
     else
       return NULL;
 
-    if (integer_onep (shift_op2))
-      return NULL;
-
     shift_op2 = wide_int_to_tree (unsigned_type_for (TREE_TYPE (shift_op2)),
 				  tree_log2 (shift_op2));
     function_instance instance ("svlsl", functions::svlsl,
diff --git a/gcc/config/aarch64/aarch64-sve-builtins.cc b/gcc/config/aarch64/aarch64-sve-builtins.cc
index e7c703c987e..41673745cfe 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins.cc
@@ -3636,6 +3636,33 @@ gimple_folder::fold_const_binary (enum tree_code code)
   return NULL;
 }
 
+/* Fold the active lanes to X and set the inactive lanes according to the
+   predication.  Return the new statement.  */
+gimple *
+gimple_folder::fold_active_lanes_to (tree x)
+{
+  /* If predication is _x or the predicate is ptrue, fold to X.  */
+  if (pred == PRED_x
+      || is_ptrue (gimple_call_arg (call, 0), type_suffix (0).element_bytes))
+    return gimple_build_assign (lhs, x);
+
+  /* If the predication is _z or _m, calculate a vector that supplies the
+     values of inactive lanes (the first vector argument for m and a zero
+     vector from z).  */
+  tree vec_inactive;
+  if (pred == PRED_z)
+    vec_inactive = build_zero_cst (TREE_TYPE (lhs));
+  else
+    vec_inactive = gimple_call_arg (call, 1);
+  if (operand_equal_p (x, vec_inactive, 0))
+    return gimple_build_assign (lhs, x);
+
+  gimple_seq stmts = NULL;
+  tree pred = convert_pred (stmts, vector_type (0), 0);
+  gsi_insert_seq_before (gsi, stmts, GSI_SAME_STMT);
+  return gimple_build_assign (lhs, VEC_COND_EXPR, pred, x, vec_inactive);
+}
+
 /* Try to fold the call.  Return the new statement on success and null
    on failure.  */
 gimple *
diff --git a/gcc/config/aarch64/aarch64-sve-builtins.h b/gcc/config/aarch64/aarch64-sve-builtins.h
index 645e56badbe..4cdc0541bdc 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins.h
+++ b/gcc/config/aarch64/aarch64-sve-builtins.h
@@ -637,6 +637,7 @@ public:
   gimple *fold_to_ptrue ();
   gimple *fold_to_vl_pred (unsigned int);
   gimple *fold_const_binary (enum tree_code);
+  gimple *fold_active_lanes_to (tree);
 
   gimple *fold ();
 
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c
index d5a23bf0726..521f8bb4758 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c
@@ -57,7 +57,6 @@ TEST_UNIFORM_ZX (div_w0_s32_m_untied, svint32_t, int32_t,
 
 /*
 ** div_1_s32_m_tied1:
-**	sel	z0\.s, p0, z0\.s, z0\.s
 **	ret
 */
 TEST_UNIFORM_Z (div_1_s32_m_tied1, svint32_t,
@@ -66,7 +65,7 @@ TEST_UNIFORM_Z (div_1_s32_m_tied1, svint32_t,
 
 /*
 ** div_1_s32_m_untied:
-**	sel	z0\.s, p0, z1\.s, z1\.s
+**	mov	z0\.d, z1\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_s32_m_untied, svint32_t,
@@ -217,9 +216,8 @@ TEST_UNIFORM_ZX (div_w0_s32_z_untied, svint32_t, int32_t,
 
 /*
 ** div_1_s32_z_tied1:
-**	mov	(z[0-9]+\.s), #1
-**	movprfx	z0\.s, p0/z, z0\.s
-**	sdiv	z0\.s, p0/m, z0\.s, \1
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.s, p0, z0\.s, \1\.s
 **	ret
 */
 TEST_UNIFORM_Z (div_1_s32_z_tied1, svint32_t,
@@ -228,9 +226,8 @@ TEST_UNIFORM_Z (div_1_s32_z_tied1, svint32_t,
 
 /*
 ** div_1_s32_z_untied:
-**	mov	z0\.s, #1
-**	movprfx	z0\.s, p0/z, z0\.s
-**	sdivr	z0\.s, p0/m, z0\.s, z1\.s
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.s, p0, z1\.s, \1\.s
 **	ret
 */
 TEST_UNIFORM_Z (div_1_s32_z_untied, svint32_t,
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c
index cfed6f9c1b3..1396c3c8191 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c
@@ -57,7 +57,6 @@ TEST_UNIFORM_ZX (div_x0_s64_m_untied, svint64_t, int64_t,
 
 /*
 ** div_1_s64_m_tied1:
-**	sel	z0\.d, p0, z0\.d, z0\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_s64_m_tied1, svint64_t,
@@ -66,7 +65,7 @@ TEST_UNIFORM_Z (div_1_s64_m_tied1, svint64_t,
 
 /*
 ** div_1_s64_m_untied:
-**	sel	z0\.d, p0, z1\.d, z1\.d
+**	mov	z0\.d, z1\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_s64_m_untied, svint64_t,
@@ -217,9 +216,8 @@ TEST_UNIFORM_ZX (div_x0_s64_z_untied, svint64_t, int64_t,
 
 /*
 ** div_1_s64_z_tied1:
-**	mov	(z[0-9]+\.d), #1
-**	movprfx	z0\.d, p0/z, z0\.d
-**	sdiv	z0\.d, p0/m, z0\.d, \1
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.d, p0, z0\.d, \1\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_s64_z_tied1, svint64_t,
@@ -228,9 +226,8 @@ TEST_UNIFORM_Z (div_1_s64_z_tied1, svint64_t,
 
 /*
 ** div_1_s64_z_untied:
-**	mov	z0\.d, #1
-**	movprfx	z0\.d, p0/z, z0\.d
-**	sdivr	z0\.d, p0/m, z0\.d, z1\.d
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.d, p0, z1\.d, \1\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_s64_z_untied, svint64_t,
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c
index 9707664caf4..423d0eac630 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c
@@ -57,7 +57,6 @@ TEST_UNIFORM_ZX (div_w0_u32_m_untied, svuint32_t, uint32_t,
 
 /*
 ** div_1_u32_m_tied1:
-**	sel	z0\.s, p0, z0\.s, z0\.s
 **	ret
 */
 TEST_UNIFORM_Z (div_1_u32_m_tied1, svuint32_t,
@@ -66,7 +65,7 @@ TEST_UNIFORM_Z (div_1_u32_m_tied1, svuint32_t,
 
 /*
 ** div_1_u32_m_untied:
-**	sel	z0\.s, p0, z1\.s, z1\.s
+**	mov	z0\.d, z1\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_u32_m_untied, svuint32_t,
@@ -196,9 +195,8 @@ TEST_UNIFORM_ZX (div_w0_u32_z_untied, svuint32_t, uint32_t,
 
 /*
 ** div_1_u32_z_tied1:
-**	mov	(z[0-9]+\.s), #1
-**	movprfx	z0\.s, p0/z, z0\.s
-**	udiv	z0\.s, p0/m, z0\.s, \1
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.s, p0, z0\.s, \1\.s
 **	ret
 */
 TEST_UNIFORM_Z (div_1_u32_z_tied1, svuint32_t,
@@ -207,9 +205,8 @@ TEST_UNIFORM_Z (div_1_u32_z_tied1, svuint32_t,
 
 /*
 ** div_1_u32_z_untied:
-**	mov	z0\.s, #1
-**	movprfx	z0\.s, p0/z, z0\.s
-**	udivr	z0\.s, p0/m, z0\.s, z1\.s
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.s, p0, z1\.s, \1\.s
 **	ret
 */
 TEST_UNIFORM_Z (div_1_u32_z_untied, svuint32_t,
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c
index 5247ebdac7a..2103f4ce80f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c
@@ -57,7 +57,6 @@ TEST_UNIFORM_ZX (div_x0_u64_m_untied, svuint64_t, uint64_t,
 
 /*
 ** div_1_u64_m_tied1:
-**	sel	z0\.d, p0, z0\.d, z0\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_u64_m_tied1, svuint64_t,
@@ -66,7 +65,7 @@ TEST_UNIFORM_Z (div_1_u64_m_tied1, svuint64_t,
 
 /*
 ** div_1_u64_m_untied:
-**	sel	z0\.d, p0, z1\.d, z1\.d
+**	mov	z0\.d, z1\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_u64_m_untied, svuint64_t,
@@ -196,9 +195,8 @@ TEST_UNIFORM_ZX (div_x0_u64_z_untied, svuint64_t, uint64_t,
 
 /*
 ** div_1_u64_z_tied1:
-**	mov	(z[0-9]+\.d), #1
-**	movprfx	z0\.d, p0/z, z0\.d
-**	udiv	z0\.d, p0/m, z0\.d, \1
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.d, p0, z0\.d, \1\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_u64_z_tied1, svuint64_t,
@@ -207,9 +205,8 @@ TEST_UNIFORM_Z (div_1_u64_z_tied1, svuint64_t,
 
 /*
 ** div_1_u64_z_untied:
-**	mov	z0\.d, #1
-**	movprfx	z0\.d, p0/z, z0\.d
-**	udivr	z0\.d, p0/m, z0\.d, z1\.d
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.d, p0, z1\.d, \1\.d
 **	ret
 */
 TEST_UNIFORM_Z (div_1_u64_z_untied, svuint64_t,
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c
index 52e35dc7f95..905c83904de 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c
@@ -114,13 +114,22 @@ TEST_UNIFORM_Z (mul_intminnop2_s16_m_tied1, svint16_t,
 
 /*
 ** mul_1_s16_m_tied1:
-**	sel	z0\.h, p0, z0\.h, z0\.h
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_s16_m_tied1, svint16_t,
 		z0 = svmul_n_s16_m (p0, z0, 1),
 		z0 = svmul_m (p0, z0, 1))
 
+/*
+** mul_1op1_s16_m_tied2:
+**	mov	(z[0-9]+\.h), #1
+**	sel	z0\.h, p0, z0\.h, \1
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s16_m_tied2, svint16_t,
+		z0 = svmul_s16_m (p0, svdup_s16 (1), z0),
+		z0 = svmul_m (p0, svdup_s16 (1), z0))
+
 /*
 ** mul_3_s16_m_tied1:
 **	mov	(z[0-9]+\.h), #3
@@ -305,15 +314,24 @@ TEST_UNIFORM_Z (mul_intminnop2_s16_z_tied1, svint16_t,
 
 /*
 ** mul_1_s16_z_tied1:
-**	mov	z31.h, #1
-**	movprfx	z0.h, p0/z, z0.h
-**	mul	z0.h, p0/m, z0.h, z31.h
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0.h, p0, z0.h, \1\.h
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_s16_z_tied1, svint16_t,
 		z0 = svmul_n_s16_z (p0, z0, 1),
 		z0 = svmul_z (p0, z0, 1))
 
+/*
+** mul_1op1_s16_z_tied2:
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0.h, p0, z0.h, \1\.h
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s16_z_tied2, svint16_t,
+		z0 = svmul_s16_z (p0, svdup_s16 (1), z0),
+		z0 = svmul_z (p0, svdup_s16 (1), z0))
+
 /*
 ** mul_3_s16_z_tied1:
 **	mov	(z[0-9]+\.h), #3
@@ -486,6 +504,23 @@ TEST_UNIFORM_Z (mul_1_s16_x_tied1, svint16_t,
 		z0 = svmul_n_s16_x (p0, z0, 1),
 		z0 = svmul_x (p0, z0, 1))
 
+/*
+** mul_1op1_s16_x_tied2:
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s16_x_tied2, svint16_t,
+		z0 = svmul_s16_x (p0, svdup_s16 (1), z0),
+		z0 = svmul_x (p0, svdup_s16 (1), z0))
+
+/*
+** mul_1op1_s16_x_untied:
+**	mov	z0\.d, z1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s16_x_untied, svint16_t,
+		z0 = svmul_s16_x (p0, svdup_s16 (1), z1),
+		z0 = svmul_x (p0, svdup_s16 (1), z1))
+
 /*
 ** mul_3_s16_x_tied1:
 **	mul	z0\.h, z0\.h, #3
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c
index 0974038e67f..eb8533729d7 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c
@@ -114,13 +114,22 @@ TEST_UNIFORM_Z (mul_intminnop2_s32_m_tied1, svint32_t,
 
 /*
 ** mul_1_s32_m_tied1:
-**	sel	z0\.s, p0, z0\.s, z0\.s
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_s32_m_tied1, svint32_t,
 		z0 = svmul_n_s32_m (p0, z0, 1),
 		z0 = svmul_m (p0, z0, 1))
 
+/*
+** mul_1op1_s32_m_tied2:
+**	mov	(z[0-9]+\.s), #1
+**	sel	z0\.s, p0, z0\.s, \1
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s32_m_tied2, svint32_t,
+		z0 = svmul_s32_m (p0, svdup_s32 (1), z0),
+		z0 = svmul_m (p0, svdup_s32 (1), z0))
+
 /*
 ** mul_3_s32_m_tied1:
 **	mov	(z[0-9]+\.s), #3
@@ -305,15 +314,24 @@ TEST_UNIFORM_Z (mul_intminnop2_s32_z_tied1, svint32_t,
 
 /*
 ** mul_1_s32_z_tied1:
-**	mov	z31.s, #1
-**	movprfx	z0.s, p0/z, z0.s
-**	mul	z0.s, p0/m, z0.s, z31.s
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0.s, p0, z0.s, \1\.s
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_s32_z_tied1, svint32_t,
 		z0 = svmul_n_s32_z (p0, z0, 1),
 		z0 = svmul_z (p0, z0, 1))
 
+/*
+** mul_1op1_s32_z_tied2:
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.s, p0, z0\.s, \1\.s
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s32_z_tied2, svint32_t,
+		z0 = svmul_s32_z (p0, svdup_s32 (1), z0),
+		z0 = svmul_z (p0, svdup_s32 (1), z0))
+
 /*
 ** mul_3_s32_z_tied1:
 **	mov	(z[0-9]+\.s), #3
@@ -486,6 +504,23 @@ TEST_UNIFORM_Z (mul_1_s32_x_tied1, svint32_t,
 		z0 = svmul_n_s32_x (p0, z0, 1),
 		z0 = svmul_x (p0, z0, 1))
 
+/*
+** mul_1op1_s32_x_tied2:
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s32_x_tied2, svint32_t,
+		z0 = svmul_s32_x (p0, svdup_s32 (1), z0),
+		z0 = svmul_x (p0, svdup_s32 (1), z0))
+
+/*
+** mul_1op1_s32_x_untied:
+**	mov	z0\.d, z1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s32_x_untied, svint32_t,
+		z0 = svmul_s32_x (p0, svdup_s32 (1), z1),
+		z0 = svmul_x (p0, svdup_s32 (1), z1))
+
 /*
 ** mul_3_s32_x_tied1:
 **	mul	z0\.s, z0\.s, #3
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c
index 537eb0eef0b..a215dd96d23 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c
@@ -114,13 +114,22 @@ TEST_UNIFORM_Z (mul_intminnop2_s64_m_tied1, svint64_t,
 
 /*
 ** mul_1_s64_m_tied1:
-**	sel	z0\.d, p0, z0\.d, z0\.d
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_s64_m_tied1, svint64_t,
 		z0 = svmul_n_s64_m (p0, z0, 1),
 		z0 = svmul_m (p0, z0, 1))
 
+/*
+** mul_1op1_s64_m_tied2:
+**	mov	(z[0-9]+\.d), #1
+**	sel	z0\.d, p0, z0\.d, \1
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s64_m_tied2, svint64_t,
+		z0 = svmul_s64_m (p0, svdup_s64 (1), z0),
+		z0 = svmul_m (p0, svdup_s64 (1), z0))
+
 /*
 ** mul_2_s64_m_tied1:
 **	lsl	z0\.d, p0/m, z0\.d, #1
@@ -314,15 +323,24 @@ TEST_UNIFORM_Z (mul_intminnop2_s64_z_tied1, svint64_t,
 
 /*
 ** mul_1_s64_z_tied1:
-**	mov	z31.d, #1
-**	movprfx	z0.d, p0/z, z0.d
-**	mul	z0.d, p0/m, z0.d, z31.d
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.d, p0, z0\.d, \1\.d
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_s64_z_tied1, svint64_t,
 		z0 = svmul_n_s64_z (p0, z0, 1),
 		z0 = svmul_z (p0, z0, 1))
 
+/*
+** mul_1op1_s64_z_tied2:
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.d, p0, z0\.d, \1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s64_z_tied2, svint64_t,
+		z0 = svmul_s64_z (p0, svdup_s64 (1), z0),
+		z0 = svmul_z (p0, svdup_s64 (1), z0))
+
 /*
 ** mul_2_s64_z_tied1:
 **	movprfx	z0.d, p0/z, z0.d
@@ -505,6 +523,23 @@ TEST_UNIFORM_Z (mul_1_s64_x_tied1, svint64_t,
 		z0 = svmul_n_s64_x (p0, z0, 1),
 		z0 = svmul_x (p0, z0, 1))
 
+/*
+** mul_1op1_s64_x_tied2:
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s64_x_tied2, svint64_t,
+		z0 = svmul_s64_x (p0, svdup_s64 (1), z0),
+		z0 = svmul_x (p0, svdup_s64 (1), z0))
+
+/*
+** mul_1op1_s64_x_untied:
+**	mov	z0\.d, z1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s64_x_untied, svint64_t,
+		z0 = svmul_s64_x (p0, svdup_s64 (1), z1),
+		z0 = svmul_x (p0, svdup_s64 (1), z1))
+
 /*
 ** mul_2_s64_x_tied1:
 **	add	z0\.d, z0\.d, z0\.d
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c
index 0def4bd4974..5c862c5c323 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c
@@ -114,13 +114,22 @@ TEST_UNIFORM_Z (mul_intminnop2_s8_m_tied1, svint8_t,
 
 /*
 ** mul_1_s8_m_tied1:
-**	sel	z0\.b, p0, z0\.b, z0\.b
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_s8_m_tied1, svint8_t,
 		z0 = svmul_n_s8_m (p0, z0, 1),
 		z0 = svmul_m (p0, z0, 1))
 
+/*
+** mul_1op1_s8_m_tied2:
+**	mov	(z[0-9]+)\.b, #1
+**	sel	z0\.b, p0, z0\.b, \1\.b
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s8_m_tied2, svint8_t,
+		z0 = svmul_s8_m (p0, svdup_s8 (1), z0),
+		z0 = svmul_m (p0, svdup_s8 (1), z0))
+
 /*
 ** mul_3_s8_m_tied1:
 **	mov	(z[0-9]+\.b), #3
@@ -305,15 +314,24 @@ TEST_UNIFORM_Z (mul_intminnop2_s8_z_tied1, svint8_t,
 
 /*
 ** mul_1_s8_z_tied1:
-**	mov	z31.b, #1
-**	movprfx	z0.b, p0/z, z0.b
-**	mul	z0.b, p0/m, z0.b, z31.b
+**	mov	(z[0-9]+\.b), #0
+**	sel	z0.b, p0, z0.b, \1
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_s8_z_tied1, svint8_t,
 		z0 = svmul_n_s8_z (p0, z0, 1),
 		z0 = svmul_z (p0, z0, 1))
 
+/*
+** mul_1op1_s8_z_tied2:
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.b, p0, z0\.b, \1\.b
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s8_z_tied2, svint8_t,
+		z0 = svmul_s8_z (p0, svdup_s8 (1), z0),
+		z0 = svmul_z (p0, svdup_s8 (1), z0))
+
 /*
 ** mul_3_s8_z_tied1:
 **	mov	(z[0-9]+\.b), #3
@@ -486,6 +504,23 @@ TEST_UNIFORM_Z (mul_1_s8_x_tied1, svint8_t,
 		z0 = svmul_n_s8_x (p0, z0, 1),
 		z0 = svmul_x (p0, z0, 1))
 
+/*
+** mul_1op1_s8_x_tied2:
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s8_x_tied2, svint8_t,
+		z0 = svmul_s8_x (p0, svdup_s8 (1), z0),
+		z0 = svmul_x (p0, svdup_s8 (1), z0))
+
+/*
+** mul_1op1_s8_x_untied:
+**	mov	z0\.d, z1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_s8_x_untied, svint8_t,
+		z0 = svmul_s8_x (p0, svdup_s8 (1), z1),
+		z0 = svmul_x (p0, svdup_s8 (1), z1))
+
 /*
 ** mul_3_s8_x_tied1:
 **	mul	z0\.b, z0\.b, #3
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c
index cc83123aacb..37b49aced59 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c
@@ -105,13 +105,22 @@ TEST_UNIFORM_Z (mul_maxpownop2_u16_m_tied1, svuint16_t,
 
 /*
 ** mul_1_u16_m_tied1:
-**	sel	z0\.h, p0, z0\.h, z0\.h
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_u16_m_tied1, svuint16_t,
 		z0 = svmul_n_u16_m (p0, z0, 1),
 		z0 = svmul_m (p0, z0, 1))
 
+/*
+** mul_1op1_u16_m_tied2:
+**	mov	(z[0-9]+\.h), #1
+**	sel	z0\.h, p0, z0\.h, \1
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u16_m_tied2, svuint16_t,
+		z0 = svmul_u16_m (p0, svdup_u16 (1), z0),
+		z0 = svmul_m (p0, svdup_u16 (1), z0))
+
 /*
 ** mul_3_u16_m_tied1:
 **	mov	(z[0-9]+\.h), #3
@@ -286,15 +295,24 @@ TEST_UNIFORM_Z (mul_maxpownop2_u16_z_tied1, svuint16_t,
 
 /*
 ** mul_1_u16_z_tied1:
-**	mov	z31.h, #1
-**	movprfx	z0.h, p0/z, z0.h
-**	mul	z0.h, p0/m, z0.h, z31.h
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0.h, p0, z0.h, \1\.h
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_u16_z_tied1, svuint16_t,
 		z0 = svmul_n_u16_z (p0, z0, 1),
 		z0 = svmul_z (p0, z0, 1))
 
+/*
+** mul_1op1_u16_z_tied2:
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0.h, p0, z0.h, \1\.h
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u16_z_tied2, svuint16_t,
+		z0 = svmul_u16_z (p0, svdup_u16 (1), z0),
+		z0 = svmul_z (p0, svdup_u16 (1), z0))
+
 /*
 ** mul_3_u16_z_tied1:
 **	mov	(z[0-9]+\.h), #3
@@ -458,6 +476,23 @@ TEST_UNIFORM_Z (mul_1_u16_x_tied1, svuint16_t,
 		z0 = svmul_n_u16_x (p0, z0, 1),
 		z0 = svmul_x (p0, z0, 1))
 
+/*
+** mul_1op1_u16_x_tied2:
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u16_x_tied2, svuint16_t,
+		z0 = svmul_u16_x (p0, svdup_u16 (1), z0),
+		z0 = svmul_x (p0, svdup_u16 (1), z0))
+
+/*
+** mul_1op1_u16_x_untied:
+**	mov	z0\.d, z1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u16_x_untied, svuint16_t,
+		z0 = svmul_u16_x (p0, svdup_u16 (1), z1),
+		z0 = svmul_x (p0, svdup_u16 (1), z1))
+
 /*
 ** mul_3_u16_x_tied1:
 **	mul	z0\.h, z0\.h, #3
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c
index 9d63731d019..bc379da8a89 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c
@@ -105,13 +105,22 @@ TEST_UNIFORM_Z (mul_maxpownop2_u32_m_tied1, svuint32_t,
 
 /*
 ** mul_1_u32_m_tied1:
-**	sel	z0\.s, p0, z0\.s, z0\.s
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_u32_m_tied1, svuint32_t,
 		z0 = svmul_n_u32_m (p0, z0, 1),
 		z0 = svmul_m (p0, z0, 1))
 
+/*
+** mul_1op1_u32_m_tied2:
+**	mov	(z[0-9]+\.s), #1
+**	sel	z0\.s, p0, z0\.s, \1
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u32_m_tied2, svuint32_t,
+		z0 = svmul_u32_m (p0, svdup_u32 (1), z0),
+		z0 = svmul_m (p0, svdup_u32 (1), z0))
+
 /*
 ** mul_3_u32_m_tied1:
 **	mov	(z[0-9]+\.s), #3
@@ -286,15 +295,24 @@ TEST_UNIFORM_Z (mul_maxpownop2_u32_z_tied1, svuint32_t,
 
 /*
 ** mul_1_u32_z_tied1:
-**	mov	z31.s, #1
-**	movprfx	z0.s, p0/z, z0.s
-**	mul	z0.s, p0/m, z0.s, z31.s
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0.s, p0, z0.s, \1\.s
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_u32_z_tied1, svuint32_t,
 		z0 = svmul_n_u32_z (p0, z0, 1),
 		z0 = svmul_z (p0, z0, 1))
 
+/*
+** mul_1op1_u32_z_tied2:
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.s, p0, z0\.s, \1\.s
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u32_z_tied2, svuint32_t,
+		z0 = svmul_u32_z (p0, svdup_u32 (1), z0),
+		z0 = svmul_z (p0, svdup_u32 (1), z0))
+
 /*
 ** mul_3_u32_z_tied1:
 **	mov	(z[0-9]+\.s), #3
@@ -458,6 +476,23 @@ TEST_UNIFORM_Z (mul_1_u32_x_tied1, svuint32_t,
 		z0 = svmul_n_u32_x (p0, z0, 1),
 		z0 = svmul_x (p0, z0, 1))
 
+/*
+** mul_1op1_u32_x_tied2:
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u32_x_tied2, svuint32_t,
+		z0 = svmul_u32_x (p0, svdup_u32 (1), z0),
+		z0 = svmul_x (p0, svdup_u32 (1), z0))
+
+/*
+** mul_1op1_u32_x_untied:
+**	mov	z0\.d, z1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u32_x_untied, svuint32_t,
+		z0 = svmul_u32_x (p0, svdup_u32 (1), z1),
+		z0 = svmul_x (p0, svdup_u32 (1), z1))
+
 /*
 ** mul_3_u32_x_tied1:
 **	mul	z0\.s, z0\.s, #3
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c
index 4f501df4fd5..324edbc3663 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c
@@ -105,13 +105,22 @@ TEST_UNIFORM_Z (mul_maxpownop2_u64_m_tied1, svuint64_t,
 
 /*
 ** mul_1_u64_m_tied1:
-**	sel	z0\.d, p0, z0\.d, z0\.d
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_u64_m_tied1, svuint64_t,
 		z0 = svmul_n_u64_m (p0, z0, 1),
 		z0 = svmul_m (p0, z0, 1))
 
+/*
+** mul_1op1_u64_m_tied2:
+**	mov	(z[0-9]+\.d), #1
+**	sel	z0\.d, p0, z0\.d, \1
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u64_m_tied2, svuint64_t,
+		z0 = svmul_u64_m (p0, svdup_u64 (1), z0),
+		z0 = svmul_m (p0, svdup_u64 (1), z0))
+
 /*
 ** mul_2_u64_m_tied1:
 **	lsl	z0\.d, p0/m, z0\.d, #1
@@ -295,15 +304,24 @@ TEST_UNIFORM_Z (mul_maxpownop2_u64_z_tied1, svuint64_t,
 
 /*
 ** mul_1_u64_z_tied1:
-**	mov	z31.d, #1
-**	movprfx	z0.d, p0/z, z0.d
-**	mul	z0.d, p0/m, z0.d, z31.d
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.d, p0, z0\.d, \1\.d
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_u64_z_tied1, svuint64_t,
 		z0 = svmul_n_u64_z (p0, z0, 1),
 		z0 = svmul_z (p0, z0, 1))
 
+/*
+** mul_1op1_u64_z_tied2:
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.d, p0, z0\.d, \1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u64_z_tied2, svuint64_t,
+		z0 = svmul_u64_z (p0, svdup_u64 (1), z0),
+		z0 = svmul_z (p0, svdup_u64 (1), z0))
+
 /*
 ** mul_2_u64_z_tied1:
 **	movprfx	z0.d, p0/z, z0.d
@@ -477,6 +495,23 @@ TEST_UNIFORM_Z (mul_1_u64_x_tied1, svuint64_t,
 		z0 = svmul_n_u64_x (p0, z0, 1),
 		z0 = svmul_x (p0, z0, 1))
 
+/*
+** mul_1op1_u64_x_tied2:
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u64_x_tied2, svuint64_t,
+		z0 = svmul_u64_x (p0, svdup_u64 (1), z0),
+		z0 = svmul_x (p0, svdup_u64 (1), z0))
+
+/*
+** mul_1op1_u64_x_untied:
+**	mov	z0\.d, z1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u64_x_untied, svuint64_t,
+		z0 = svmul_u64_x (p0, svdup_u64 (1), z1),
+		z0 = svmul_x (p0, svdup_u64 (1), z1))
+
 /*
 ** mul_2_u64_x_tied1:
 **	add	z0\.d, z0\.d, z0\.d
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c
index e56fa6069b0..6a5ff3b88ea 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c
@@ -105,13 +105,22 @@ TEST_UNIFORM_Z (mul_maxpownop2_u8_m_tied1, svuint8_t,
 
 /*
 ** mul_1_u8_m_tied1:
-**	sel	z0\.b, p0, z0\.b, z0\.b
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_u8_m_tied1, svuint8_t,
 		z0 = svmul_n_u8_m (p0, z0, 1),
 		z0 = svmul_m (p0, z0, 1))
 
+/*
+** mul_1op1_u8_m_tied2:
+**	mov	(z[0-9]+)\.b, #1
+**	sel	z0\.b, p0, z0\.b, \1\.b
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u8_m_tied2, svuint8_t,
+		z0 = svmul_u8_m (p0, svdup_u8 (1), z0),
+		z0 = svmul_m (p0, svdup_u8 (1), z0))
+
 /*
 ** mul_3_u8_m_tied1:
 **	mov	(z[0-9]+\.b), #3
@@ -286,15 +295,24 @@ TEST_UNIFORM_Z (mul_maxpownop2_u8_z_tied1, svuint8_t,
 
 /*
 ** mul_1_u8_z_tied1:
-**	mov	z31.b, #1
-**	movprfx	z0.b, p0/z, z0.b
-**	mul	z0.b, p0/m, z0.b, z31.b
+**	mov	(z[0-9]+\.b), #0
+**	sel	z0.b, p0, z0.b, \1
 **	ret
 */
 TEST_UNIFORM_Z (mul_1_u8_z_tied1, svuint8_t,
 		z0 = svmul_n_u8_z (p0, z0, 1),
 		z0 = svmul_z (p0, z0, 1))
 
+/*
+** mul_1op1_u8_z_tied2:
+**	mov	(z[0-9]+)\.b, #0
+**	sel	z0\.b, p0, z0\.b, \1\.b
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u8_z_tied2, svuint8_t,
+		z0 = svmul_u8_z (p0, svdup_u8 (1), z0),
+		z0 = svmul_z (p0, svdup_u8 (1), z0))
+
 /*
 ** mul_3_u8_z_tied1:
 **	mov	(z[0-9]+\.b), #3
@@ -458,6 +476,23 @@ TEST_UNIFORM_Z (mul_1_u8_x_tied1, svuint8_t,
 		z0 = svmul_n_u8_x (p0, z0, 1),
 		z0 = svmul_x (p0, z0, 1))
 
+/*
+** mul_1op1_u8_x_tied2:
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u8_x_tied2, svuint8_t,
+		z0 = svmul_u8_x (p0, svdup_u8 (1), z0),
+		z0 = svmul_x (p0, svdup_u8 (1), z0))
+
+/*
+** mul_1op1_u8_x_untied:
+**	mov	z0\.d, z1\.d
+**	ret
+*/
+TEST_UNIFORM_Z (mul_1op1_u8_x_untied, svuint8_t,
+		z0 = svmul_u8_x (p0, svdup_u8 (1), z1),
+		z0 = svmul_x (p0, svdup_u8 (1), z1))
+
 /*
 ** mul_3_u8_x_tied1:
 **	mul	z0\.b, z0\.b, #3
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/fold_div_zero.c b/gcc/testsuite/gcc.target/aarch64/sve/fold_div_zero.c
index 0dcd018cadc..8c854fca5c9 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/fold_div_zero.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/fold_div_zero.c
@@ -85,8 +85,7 @@ svint64_t s64_z_pg_op2 (svbool_t pg, svint64_t op1)
 
 /*
 ** s64_m_pg_op2:
-**	mov	(z[0-9]+)\.b, #0
-**	sdiv	(z[0-9]\.d), p[0-7]/m, \2, \1\.d
+**	mov	z0\.d, p0/m, #0
 **	ret
 */
 svint64_t s64_m_pg_op2 (svbool_t pg, svint64_t op1)
@@ -146,8 +145,7 @@ svint64_t s64_n_z_pg_op2 (svbool_t pg, svint64_t op1)
 
 /*
 ** s64_n_m_pg_op2:
-**	mov	(z[0-9]+)\.b, #0
-**	sdiv	(z[0-9]+\.d), p[0-7]/m, \2, \1\.d
+**	mov	z0\.d, p0/m, #0
 **	ret
 */
 svint64_t s64_n_m_pg_op2 (svbool_t pg, svint64_t op1)
@@ -267,8 +265,7 @@ svuint64_t u64_z_pg_op2 (svbool_t pg, svuint64_t op1)
 
 /*
 ** u64_m_pg_op2:
-**	mov	(z[0-9]+)\.b, #0
-**	udiv	(z[0-9]+\.d), p[0-7]/m, \2, \1\.d
+**	mov	z0\.d, p0/m, #0
 **	ret
 */
 svuint64_t u64_m_pg_op2 (svbool_t pg, svuint64_t op1)
@@ -328,8 +325,7 @@ svuint64_t u64_n_z_pg_op2 (svbool_t pg, svuint64_t op1)
 
 /*
 ** u64_n_m_pg_op2:
-**	mov	(z[0-9]+)\.b, #0
-**	udiv	(z[0-9]+\.d), p[0-7]/m, \2, \1\.d
+**	mov	z0\.d, p0/m, #0
 **	ret
 */
 svuint64_t u64_n_m_pg_op2 (svbool_t pg, svuint64_t op1)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/mul_const_run.c b/gcc/testsuite/gcc.target/aarch64/sve/mul_const_run.c
index 6af00439e39..c369d5be167 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/mul_const_run.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/mul_const_run.c
@@ -10,6 +10,8 @@ typedef svfloat32_t svfloat32_ __attribute__((arm_sve_vector_bits(128)));
 typedef svfloat64_t svfloat64_ __attribute__((arm_sve_vector_bits(128)));
 typedef svint32_t svint32_ __attribute__((arm_sve_vector_bits(128)));
 typedef svint64_t svint64_ __attribute__((arm_sve_vector_bits(128)));
+typedef svuint8_t svuint8_ __attribute__((arm_sve_vector_bits(128)));
+typedef svuint16_t svuint16_ __attribute__((arm_sve_vector_bits(128)));
 typedef svuint32_t svuint32_ __attribute__((arm_sve_vector_bits(128)));
 typedef svuint64_t svuint64_ __attribute__((arm_sve_vector_bits(128)));
 
@@ -84,6 +86,10 @@ typedef svuint64_t svuint64_ __attribute__((arm_sve_vector_bits(128)));
   TEST_VALUES_U_1 (64, 4, 7) \
   TEST_VALUES_U_1 (32, 7, 3) \
   TEST_VALUES_U_1 (64, 7, 3) \
+  TEST_VALUES_U_1 (8, 1, 11) \
+  TEST_VALUES_U_1 (16, 1, UINT16_MAX) \
+  TEST_VALUES_U_1 (32, 1, 0) \
+  TEST_VALUES_U_1 (64, 1, (1ULL << 63)) \
   TEST_VALUES_U_1 (32, 11, 1) \
   TEST_VALUES_U_1 (64, 11, 1)