From patchwork Fri Oct 18 11:18:05 2024
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 1999075
From: Richard Sandiford
To: rguenther@suse.de, gcc-patches@gcc.gnu.org
Cc: Richard Sandiford
Subject: [PATCH 8/9] Try to simplify (X >> C1) * (C2 << C1) -> X * C2
Date: Fri, 18 Oct 2024 12:18:05 +0100
Message-Id: <20241018111806.4026759-9-richard.sandiford@arm.com>
In-Reply-To: <20241018111806.4026759-1-richard.sandiford@arm.com>
References: <20241018111806.4026759-1-richard.sandiford@arm.com>

This patch adds a rule to simplify (X >> C1) * (C2 << C1) -> X * C2
when the low C1 bits of X are known to be zero.
As with the earlier X >> C1 << (C2 + C1) patch, any single conversion
is allowed between the shift and the multiplication.

gcc/
	* match.pd: Simplify (X >> C1) * (C2 << C1) -> X * C2 if the
	low C1 bits of X are zero.

gcc/testsuite/
	* gcc.dg/tree-ssa/shifts-3.c: New test.
	* gcc.dg/tree-ssa/shifts-4.c: Likewise.
	* gcc.target/aarch64/sve/cnt_fold_5.c: Likewise.
---
 gcc/match.pd                                  | 13 ++++
 gcc/testsuite/gcc.dg/tree-ssa/shifts-3.c      | 65 +++++++++++++++++++
 gcc/testsuite/gcc.dg/tree-ssa/shifts-4.c      | 23 +++++++
 .../gcc.target/aarch64/sve/cnt_fold_5.c       | 38 +++++++++++
 4 files changed, 139 insertions(+)
 create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/shifts-3.c
 create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/shifts-4.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cnt_fold_5.c

diff --git a/gcc/match.pd b/gcc/match.pd
index 41903554478..85f5eeefa08 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -4915,6 +4915,19 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
        && wi::to_widest (@2) >= wi::to_widest (@1)
        && wi::to_widest (@1) <= wi::ctz (get_nonzero_bits (@0)))
    (lshift (convert @0) (minus @2 @1))))
+
+/* (X >> C1) * (C2 << C1) -> X * C2 if the low C1 bits of X are zero.  */
+(simplify
+ (mult (convert?
+	 (rshift (with_possible_nonzero_bits2 @0) INTEGER_CST@1))
+       poly_int_tree_p@2)
+ (with { poly_widest_int factor; }
+  (if (INTEGRAL_TYPE_P (type)
+       && wi::ltu_p (wi::to_wide (@1), element_precision (type))
+       && wi::to_widest (@1) <= wi::ctz (get_nonzero_bits (@0))
+       && multiple_p (wi::to_poly_widest (@2),
+		      widest_int (1) << tree_to_uhwi (@1),
+		      &factor))
+   (mult (convert @0) { wide_int_to_tree (type, factor); }))))
 #endif
 
 /* For (x << c) >> c, optimize into x & ((unsigned)-1 >> c) for
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/shifts-3.c b/gcc/testsuite/gcc.dg/tree-ssa/shifts-3.c
new file mode 100644
index 00000000000..dcff518e630
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/shifts-3.c
@@ -0,0 +1,65 @@
+/* { dg-options "-O2 -fdump-tree-optimized-raw" } */
+
+unsigned int
+f1 (unsigned int x)
+{
+  if (x & 3)
+    __builtin_unreachable ();
+  x >>= 2;
+  return x * 20;
+}
+
+unsigned int
+f2 (unsigned int x)
+{
+  if (x & 3)
+    __builtin_unreachable ();
+  unsigned char y = x;
+  y >>= 2;
+  return y * 36;
+}
+
+unsigned long
+f3 (unsigned int x)
+{
+  if (x & 3)
+    __builtin_unreachable ();
+  x >>= 2;
+  return (unsigned long) x * 88;
+}
+
+int
+f4 (int x)
+{
+  if (x & 15)
+    __builtin_unreachable ();
+  x >>= 4;
+  return x * 48;
+}
+
+unsigned int
+f5 (int x)
+{
+  if (x & 31)
+    __builtin_unreachable ();
+  x >>= 5;
+  return x * 3200;
+}
+
+unsigned int
+f6 (unsigned int x)
+{
+  if (x & 1)
+    __builtin_unreachable ();
+  x >>= 1;
+  return x * (~0U / 3 & -2);
+}
+
+/* { dg-final { scan-tree-dump-not {<[a-z]*_div_expr,} "optimized" } } */
+/* { dg-final { scan-tree-dump-not {<rshift_expr,} "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/shifts-4.c b/gcc/testsuite/gcc.dg/tree-ssa/shifts-4.c
new file mode 100644
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/shifts-4.c
@@ -0,0 +1,23 @@
+/* { dg-options "-O2 -fdump-tree-optimized-raw" } */
+
+unsigned int
+f1 (unsigned int x)
+{
+  if (x & 3)
+    __builtin_unreachable ();
+  x >>= 2;
+  return x * 10;
+}
+
+unsigned int
+f2 (unsigned int x)
+{
+  if (x & 3)
+    __builtin_unreachable ();
+  x >>= 3;
+  return x * 24;
+}
+
+/* { dg-final { scan-tree-dump-times {<rshift_expr,} 2 "optimized" } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cnt_fold_5.c b/gcc/testsuite/gcc.target/aarch64/sve/cnt_fold_5.c
new file mode 100644
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cnt_fold_5.c
@@ -0,0 +1,38 @@
+/* { dg-options "-O2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+#include <arm_sve.h>
+
+/*
+** f1:
+**	...
+**	cntd	[^\n]+
+**	...
+**	mul	[^\n]+
+**	ret
+*/
+uint64_t
+f1 (int x)
+{
+  if (x & 3)
+    __builtin_unreachable ();
+  x >>= 2;
+  return (uint64_t) x * svcnth ();
+}
+
+/*
+** f2:
+**	...
+**	asr	[^\n]+
+**	...
+**	ret
+*/
+uint64_t
+f2 (int x)
+{
+  if (x & 3)
+    __builtin_unreachable ();
+  x >>= 2;
+  return (uint64_t) x * svcntw ();
+}