From patchwork Sun Jun 30 03:12:59 2024
X-Patchwork-Submitter: "Li, Pan2"
X-Patchwork-Id: 1954304
From: pan2.li@intel.com
To: gcc-patches@gcc.gnu.org
Cc: juzhe.zhong@rivai.ai, kito.cheng@gmail.com, richard.guenther@gmail.com,
    tamar.christina@arm.com, jeffreyalaw@gmail.com, rdapp.gcc@gmail.com, Pan Li
Subject: [PATCH v1] Vect: Distribute truncation into .SAT_SUB operands
Date: Sun, 30 Jun 2024 11:12:59 +0800
Message-Id: <20240630031259.3505352-1-pan2.li@intel.com>

From: Pan Li

To get better vectorized code for .SAT_SUB, we would like to avoid the
truncation operation on the assignment.  For example, as below:

  unsigned int _1;
  unsigned int _2;
  _9 = (unsigned short int).SAT_SUB (_1, _2);

If we can make sure that _1 is in the range of unsigned short int, for
example because of a def similar to:

  _1 = (unsigned short int)_4;

then we can distribute the truncation operation to:

  _3 = MIN_EXPR (_2, 65535);
  _9 = .SAT_SUB ((unsigned short int)_1, (unsigned short int)_3);

(A standalone sanity check of this rewrite is included after the
ChangeLog below.)

Let's take RISC-V vector as an example to show the changes.  For the
sample code below:

__attribute__((noinline))
void test (uint16_t *x, unsigned b, unsigned n)
{
  unsigned a = 0;
  uint16_t *p = x;

  do {
    a = *--p;
    *p = (uint16_t)(a >= b ? a - b : 0);
  } while (--n);
}

Before this patch:
  ...
.L3:
  vle16.v      v1,0(a3)
  vrsub.vx     v5,v2,t1
  mv           t3,a4
  addw         a4,a4,t5
  vrgather.vv  v3,v1,v5
  vsetvli      zero,zero,e32,m1,ta,ma
  vzext.vf2    v1,v3
  vssubu.vx    v1,v1,a1
  vsetvli      zero,zero,e16,mf2,ta,ma
  vncvt.x.x.w  v1,v1
  vrgather.vv  v3,v1,v5
  vse16.v      v3,0(a3)
  sub          a3,a3,t4
  bgtu         t6,a4,.L3
  ...

After this patch:
test:
  ...
.L3:
  vle16.v      v3,0(a3)
  vrsub.vx     v5,v2,a6
  mv           a7,a4
  addw         a4,a4,t3
  vrgather.vv  v1,v3,v5
  vssubu.vv    v1,v1,v6
  vrgather.vv  v3,v1,v5
  vse16.v      v3,0(a3)
  sub          a3,a3,t1
  bgtu         t4,a4,.L3
  ...

The below test suites are passed for this patch:
1. The rv64gcv full regression tests.
2. The rv64gcv build with glibc.
3. The x86 bootstrap tests.
4. The x86 full regression tests.

gcc/ChangeLog:

	* tree-vect-patterns.cc (vect_recog_sat_sub_pattern_distribute):
	Add new function to perform the truncation distribution.
	(vect_recog_sat_sub_pattern): Perform the above optimization
	before generating the .SAT_SUB call.
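Here is the standalone sanity check mentioned above.  It is not part of the
patch: it is a minimal C sketch of the identity the rewrite relies on.  For
every first operand that fits in unsigned short and a set of boundary values
of the second operand, it checks that truncating the 32-bit saturating
subtraction matches the 16-bit saturating subtraction of the truncated
operands once the second operand is clamped with MIN.  The helper names
sat_sub_u32 and sat_sub_u16 are invented for this sketch; they only model
.SAT_SUB and are not GCC internal APIs.

/* Standalone sanity check, not part of the patch.  */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t
sat_sub_u32 (uint32_t a, uint32_t b)
{
  return a >= b ? a - b : 0;  /* unsigned saturating subtraction, 32-bit  */
}

static uint16_t
sat_sub_u16 (uint16_t a, uint16_t b)
{
  return a >= b ? a - b : 0;  /* unsigned saturating subtraction, 16-bit  */
}

int
main (void)
{
  /* Boundary values for the second (unclamped) operand.  */
  static const uint32_t bs[] = { 0, 1, 2, 65534, 65535, 65536, 65537,
                                 0x7fffffffu, 0xfffffffeu, 0xffffffffu };

  /* The rewrite only fires when the first operand is known to fit in the
     narrow type, so only a in [0, 65535] needs to be checked.  */
  for (uint32_t a = 0; a <= 65535; a++)
    for (unsigned i = 0; i < sizeof bs / sizeof bs[0]; i++)
      {
        uint32_t b = bs[i];
        uint16_t lhs = (uint16_t) sat_sub_u32 (a, b);   /* original form  */
        uint32_t b_min = b < 65535 ? b : 65535;         /* MIN_EXPR (_2, 65535)  */
        uint16_t rhs = sat_sub_u16 ((uint16_t) a, (uint16_t) b_min);
        assert (lhs == rhs);
      }

  puts ("truncation distribution holds for all checked operand pairs");
  return 0;
}

The reasoning behind the check is short: when the second operand is at most
65535, both forms compute the same difference (or 0) and the result never
exceeds the first operand, so the truncation is lossless; when it exceeds
65535, it also exceeds the first operand, so the left-hand side is 0, while
clamping it to 65535 (which is >= the first operand) makes the right-hand
side 0 as well.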
Signed-off-by: Pan Li
---
 gcc/tree-vect-patterns.cc | 73 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
index 519d15f2a43..7329ecec2c4 100644
--- a/gcc/tree-vect-patterns.cc
+++ b/gcc/tree-vect-patterns.cc
@@ -4565,6 +4565,77 @@ vect_recog_sat_add_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo,
   return NULL;
 }
 
+/*
+ * Try to distribute the truncation for the .SAT_SUB pattern, which mostly
+ * occurs in the zip benchmark.  Aka:
+ *
+ *   unsigned int _1;
+ *   unsigned int _2;
+ *   _9 = (unsigned short int).SAT_SUB (_1, _2);
+ *
+ * if _1 is known to be in the range of unsigned short int.  For example
+ * there is a def _1 = (unsigned short int)_4.  Then we can distribute the
+ * truncation to:
+ *
+ *   _3 = MIN (65535, _2);
+ *   _9 = .SAT_SUB ((unsigned short int)_1, (unsigned short int)_3);
+ *
+ * Then we can get better vectorized code and avoid the unnecessary
+ * narrowing stmt during vectorization.
+ */
+static void
+vect_recog_sat_sub_pattern_distribute (vec_info *vinfo,
+                                       stmt_vec_info stmt_vinfo,
+                                       gimple *stmt, tree lhs, tree *ops)
+{
+  tree otype = TREE_TYPE (lhs);
+  tree itype = TREE_TYPE (ops[0]);
+
+  if (types_compatible_p (otype, itype))
+    return;
+
+  unsigned itype_prec = TYPE_PRECISION (itype);
+  unsigned otype_prec = TYPE_PRECISION (otype);
+
+  if (otype_prec >= itype_prec)
+    return;
+
+  int_range_max r;
+  gimple_ranger granger;
+
+  if (granger.range_of_expr (r, ops[0], stmt) && !r.undefined_p ())
+    {
+      wide_int bound = r.upper_bound ();
+      wide_int otype_max = wi::mask (otype_prec, /* negate */false, itype_prec);
+
+      if (bound != otype_max)
+        return;
+
+      tree v_otype = get_vectype_for_scalar_type (vinfo, otype);
+      tree v_itype = get_vectype_for_scalar_type (vinfo, itype);
+
+      /* 1. Build truncated op_0.  */
+      tree op_0_out = vect_recog_temp_ssa_var (otype, NULL);
+      gimple *op_0_cast = gimple_build_assign (op_0_out, NOP_EXPR, ops[0]);
+      append_pattern_def_seq (vinfo, stmt_vinfo, op_0_cast, v_otype);
+
+      /* 2. Build MIN_EXPR (op_1, 65535).  */
+      tree max = wide_int_to_tree (itype, otype_max);
+      tree op_1_in = vect_recog_temp_ssa_var (itype, NULL);
+      gimple *op_1_min = gimple_build_assign (op_1_in, MIN_EXPR, ops[1], max);
+      append_pattern_def_seq (vinfo, stmt_vinfo, op_1_min, v_itype);
+
+      /* 3. Build truncated op_1.  */
+      tree op_1_out = vect_recog_temp_ssa_var (otype, NULL);
+      gimple *op_1_cast = gimple_build_assign (op_1_out, NOP_EXPR, op_1_in);
+      append_pattern_def_seq (vinfo, stmt_vinfo, op_1_cast, v_otype);
+
+      /* 4. Update the ops.  */
+      ops[0] = op_0_out;
+      ops[1] = op_1_out;
+    }
+}
+
 /*
  * Try to detect saturation sub pattern (SAT_ADD), aka below gimple:
  *   _7 = _1 >= _2;
@@ -4590,6 +4661,8 @@ vect_recog_sat_sub_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo,
 
   if (gimple_unsigned_integer_sat_sub (lhs, ops, NULL))
     {
+      vect_recog_sat_sub_pattern_distribute (vinfo, stmt_vinfo, last_stmt,
+                                             lhs, ops);
       gimple *stmt = vect_recog_build_binary_gimple_stmt (vinfo, stmt_vinfo,
                                                           IFN_SAT_SUB, type_out,
                                                           lhs, ops[0], ops[1]);