From patchwork Wed Aug 14 09:01:48 2024
X-Patchwork-Submitter: Haochen Jiang
X-Patchwork-Id: 1972269
From: Haochen Jiang <haochen.jiang@intel.com>
To: gcc-patches@gcc.gnu.org
Cc: hongtao.liu@intel.com, ubizjak@gmail.com, "Hu, Lin1"
Subject: [PATCH 11/22] AVX10.2 ymm rounding: Support vfc{madd, mul}cph, vfixupimmp{s, d} intrins
Date: Wed, 14 Aug 2024 17:01:48 +0800
Message-Id: <20240814090159.422097-12-haochen.jiang@intel.com>
In-Reply-To: <20240814090159.422097-1-haochen.jiang@intel.com>
References: <20240814090159.422097-1-haochen.jiang@intel.com>

From: "Hu, Lin1"

gcc/ChangeLog:

	* config/i386/avx10_2roundingintrin.h: New intrins.
	* config/i386/i386-builtin-types.def: Add new DEF_FUNCTION_TYPE.
	* config/i386/i386-builtin.def (BDESC): Add new builtins.
	* config/i386/i386-expand.cc (ix86_expand_round_builtin): Handle
	V16HF_FTYPE_V16HF_V16HF_INT, V16HF_FTYPE_V16HF_V16HF_V16HF_INT,
	V16HF_FTYPE_V16HF_V16HF_V16HF_UQI_INT,
	V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI_INT,
	V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI_INT.
	* config/i386/sse.md:
	(<avx512>_fixupimm<mode><sae_name>): Add condition check.
	(<avx512>_fixupimm<mode>_mask<round_saeonly_name>): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/avx-1.c: Add new builtin test.
	* gcc.target/i386/sse-13.c: Ditto.
	* gcc.target/i386/sse-14.c: Ditto.
	* gcc.target/i386/sse-22.c: Add new macro test.
	* gcc.target/i386/sse-23.c: Ditto.
	* gcc.target/i386/avx10_2-rounding-3.c: New test.
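For illustration only (not part of the patch): assuming this series is
applied and the code is compiled with -mavx10.2-256, the new 256-bit
rounding intrinsics are called like their existing 512-bit counterparts.
The rounding argument 8 used throughout the tests below is
_MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC:

    #include <immintrin.h>

    __m256h
    cmul_rn (__m256h a, __m256h b)
    {
      /* Complex FP16 multiply (conjugate form), round-to-nearest,
	 exceptions suppressed.  */
      return _mm256_fcmul_round_pch (a, b, _MM_FROUND_TO_NEAREST_INT
					   | _MM_FROUND_NO_EXC);
    }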
---
 gcc/config/i386/avx10_2roundingintrin.h       | 247 ++++++++++++++++++
 gcc/config/i386/i386-builtin-types.def        |   5 +
 gcc/config/i386/i386-builtin.def              |  10 +
 gcc/config/i386/i386-expand.cc                |   5 +
 gcc/config/i386/sse.md                        |   4 +-
 gcc/testsuite/gcc.target/i386/avx-1.c         |  10 +
 .../gcc.target/i386/avx10_2-rounding-3.c      |  49 ++++
 gcc/testsuite/gcc.target/i386/sse-13.c        |  10 +
 gcc/testsuite/gcc.target/i386/sse-14.c        |  13 +
 gcc/testsuite/gcc.target/i386/sse-22.c        |  13 +
 gcc/testsuite/gcc.target/i386/sse-23.c        |  10 +
 11 files changed, 374 insertions(+), 2 deletions(-)

diff --git a/gcc/config/i386/avx10_2roundingintrin.h b/gcc/config/i386/avx10_2roundingintrin.h
index 15ea46b5983..d5ea6bc57da 100644
--- a/gcc/config/i386/avx10_2roundingintrin.h
+++ b/gcc/config/i386/avx10_2roundingintrin.h
@@ -1934,6 +1934,164 @@ _mm256_maskz_div_round_ps (__mmask8 __U, __m256 __A, __m256 __B,
 					      (__mmask8) __U, __R);
 }
 
+extern __inline __m256h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_fcmadd_round_pch (__m256h __A, __m256h __B, __m256h __D, const int __R)
+{
+  return (__m256h) __builtin_ia32_vfcmaddcph256_round ((__v16hf) __A,
+						       (__v16hf) __B,
+						       (__v16hf) __D,
+						       __R);
+}
+
+extern __inline __m256h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_mask_fcmadd_round_pch (__m256h __A, __mmask8 __U, __m256h __B,
+			      __m256h __D, const int __R)
+{
+  return (__m256h) __builtin_ia32_vfcmaddcph256_mask_round ((__v16hf) __A,
+							    (__v16hf) __B,
+							    (__v16hf) __D,
+							    __U,
+							    __R);
+}
+
+extern __inline __m256h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_mask3_fcmadd_round_pch (__m256h __A, __m256h __B, __m256h __D,
+			       __mmask8 __U, const int __R)
+{
+  return (__m256h) __builtin_ia32_vfcmaddcph256_mask3_round ((__v16hf) __A,
+							     (__v16hf) __B,
+							     (__v16hf) __D,
+							     __U,
+							     __R);
+}
+
+extern __inline __m256h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_maskz_fcmadd_round_pch (__mmask8 __U, __m256h __A, __m256h __B,
+			       __m256h __D, const int __R)
+{
+  return (__m256h) __builtin_ia32_vfcmaddcph256_maskz_round ((__v16hf) __A,
+							     (__v16hf) __B,
+							     (__v16hf) __D,
+							     __U,
+							     __R);
+}
+
+extern __inline __m256h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_fcmul_round_pch (__m256h __A, __m256h __B, const int __R)
+{
+  return
+    (__m256h) __builtin_ia32_vfcmulcph256_round ((__v16hf) __A,
+						 (__v16hf) __B,
+						 __R);
+}
+
+extern __inline __m256h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_mask_fcmul_round_pch (__m256h __W, __mmask8 __U, __m256h __A,
+			     __m256h __B, const int __R)
+{
+  return (__m256h) __builtin_ia32_vfcmulcph256_mask_round ((__v16hf) __A,
+							   (__v16hf) __B,
+							   (__v16hf) __W,
+							   (__mmask16) __U,
+							   __R);
+}
+
+extern __inline __m256h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_maskz_fcmul_round_pch (__mmask8 __U, __m256h __A, __m256h __B,
+			      const int __R)
+{
+  return (__m256h) __builtin_ia32_vfcmulcph256_mask_round ((__v16hf) __A,
+							   (__v16hf) __B,
+							   (__v16hf)
+							   _mm256_setzero_ph (),
+							   (__mmask16) __U,
+							   __R);
+}
+
+extern __inline __m256d
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_fixupimm_round_pd (__m256d __A, __m256d __B, __m256i __D,
+			  const int __C, const int __R)
+{
+  return (__m256d) __builtin_ia32_fixupimmpd256_mask_round ((__v4df) __A,
+							    (__v4df) __B,
+							    (__v4di) __D,
+							    __C,
+							    (__mmask8) -1,
+							    __R);
+}
+
+extern __inline __m256d
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_mask_fixupimm_round_pd (__m256d __A, __mmask8 __U, __m256d __B,
+			       __m256i __D, const int __C, const int __R)
+{
+  return (__m256d) __builtin_ia32_fixupimmpd256_mask_round ((__v4df) __A,
+							    (__v4df) __B,
+							    (__v4di) __D,
+							    __C,
+							    (__mmask8) __U,
+							    __R);
+}
+
+extern __inline __m256d
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_maskz_fixupimm_round_pd (__mmask8 __U, __m256d __A, __m256d __B,
+				__m256i __D, const int __C, const int __R)
+{
+  return (__m256d) __builtin_ia32_fixupimmpd256_maskz_round ((__v4df) __A,
+							     (__v4df) __B,
+							     (__v4di) __D,
+							     __C,
+							     (__mmask8) __U,
+							     __R);
+}
+
+extern __inline __m256
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_fixupimm_round_ps (__m256 __A, __m256 __B, __m256i __D, const int __C,
+			  const int __R)
+{
+  return (__m256) __builtin_ia32_fixupimmps256_mask_round ((__v8sf) __A,
+							   (__v8sf) __B,
+							   (__v8si) __D,
+							   __C,
+							   (__mmask8) -1,
+							   __R);
+}
+
+extern __inline __m256
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_mask_fixupimm_round_ps (__m256 __A, __mmask8 __U, __m256 __B,
+			       __m256i __D, const int __C, const int __R)
+{
+  return (__m256) __builtin_ia32_fixupimmps256_mask_round ((__v8sf) __A,
+							   (__v8sf) __B,
+							   (__v8si) __D,
+							   __C,
+							   (__mmask8) __U,
+							   __R);
+}
+
+extern __inline __m256
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm256_maskz_fixupimm_round_ps (__mmask8 __U, __m256 __A, __m256 __B,
+				__m256i __D, const int __C, const int __R)
+{
+  return (__m256) __builtin_ia32_fixupimmps256_maskz_round ((__v8sf) __A,
+							    (__v8sf) __B,
+							    (__v8si) __D,
+							    __C,
+							    (__mmask8) __U,
+							    __R);
+}
 #else
 #define _mm256_add_round_pd(A, B, R) \
 	  ((__m256d) __builtin_ia32_addpd256_mask_round ((__v4df) (A), \
@@ -3088,8 +3246,97 @@ _mm256_maskz_div_round_ps (__mmask8 __U, __m256 __A, __m256 __B,
 						  (_mm256_setzero_ps ()), \
 						  (__mmask8) (U), \
 						  (R)))
+
+#define _mm256_fcmadd_round_pch(A, B, D, R) \
+  (__m256h) __builtin_ia32_vfcmaddcph256_round ((A), (B), (D), (R))
+
+#define _mm256_mask_fcmadd_round_pch(A, U, B, D, R) \
+  ((__m256h) __builtin_ia32_vfcmaddcph256_mask_round ((__v16hf)(A), \
+						      (__v16hf)(B), \
+						      (__v16hf)(D), \
+						      (U), (R)))
+
+#define _mm256_mask3_fcmadd_round_pch(A, B, D, U, R) \
+  ((__m256h) __builtin_ia32_vfcmaddcph256_mask3_round ((A), (B), (D), (U), (R)))
+
+#define _mm256_maskz_fcmadd_round_pch(U, A, B, D, R) \
+  ((__m256h) __builtin_ia32_vfcmaddcph256_maskz_round ((A), (B), (D), (U), (R)))
+
+#define _mm256_fcmul_round_pch(A, B, R) \
+  ((__m256h) __builtin_ia32_vfcmulcph256_round ((__v16hf) (A), \
+						(__v16hf) (B), \
+						(R)))
+
+#define _mm256_mask_fcmul_round_pch(W, U, A, B, R) \
+  ((__m256h) __builtin_ia32_vfcmulcph256_mask_round ((__v16hf) (A), \
+						     (__v16hf) (B), \
+						     (__v16hf) (W), \
+						     (__mmask16) (U), \
+						     (R)))
+
+#define _mm256_maskz_fcmul_round_pch(U, A, B, R) \
+  ((__m256h) __builtin_ia32_vfcmulcph256_mask_round ((__v16hf) (A), \
+						     (__v16hf) (B), \
+						     (__v16hf) \
+						     (_mm256_setzero_ph ()), \
+						     (__mmask16) (U), \
+						     (R)))
+
+#define _mm256_fixupimm_round_pd(A, B, D, C, R) \
+  ((__m256d) __builtin_ia32_fixupimmpd256_mask_round ((__v4df) (A), \
+						      (__v4df) (B), \
+						      (__v4di) (D), \
+						      (C), \
+						      (__mmask8) (-1), \
+						      (R)))
+
+#define _mm256_mask_fixupimm_round_pd(A, U, B, D, C, R)\
+  ((__m256d) __builtin_ia32_fixupimmpd256_mask_round ((__v4df) (A), \
+						      (__v4df) (B), \
+						      (__v4di) (D), \
+						      (C), \
+						      (__mmask8) (U), \
+						      (R)))
+
+#define _mm256_maskz_fixupimm_round_pd(U, A, B, D, C, R)\
+  ((__m256d) __builtin_ia32_fixupimmpd256_maskz_round ((__v4df) (A), \
+						       (__v4df) (B), \
+						       (__v4di) (D), \
+						       (C), \
+						       (__mmask8) (U), \
+						       (R)))
+
+#define _mm256_fixupimm_round_ps(A, B, D, C, R)\
+  ((__m256) __builtin_ia32_fixupimmps256_mask_round ((__v8sf) (A), \
+						     (__v8sf) (B), \
+						     (__v8si) (D), \
+						     (C), \
+						     (__mmask8) (-1), \
+						     (R)))
+
+#define _mm256_mask_fixupimm_round_ps(A, U, B, D, C, R)\
+  ((__m256) __builtin_ia32_fixupimmps256_mask_round ((__v8sf) (A), \
+						     (__v8sf) (B), \
+						     (__v8si) (D), \
+						     (C), \
+						     (__mmask8) (U), \
+						     (R)))
+
+#define _mm256_maskz_fixupimm_round_ps(U, A, B, D, C, R)\
+  ((__m256) __builtin_ia32_fixupimmps256_maskz_round ((__v8sf) (A), \
+						      (__v8sf) (B), \
+						      (__v8si) (D), \
+						      (C), \
+						      (__mmask8) (U), \
+						      (R)))
 #endif
 
+#define _mm256_cmul_round_pch(A, B, R) _mm256_fcmul_round_pch ((A), (B), (R))
+#define _mm256_mask_cmul_round_pch(W, U, A, B, R) \
+  _mm256_mask_fcmul_round_pch ((W), (U), (A), (B), (R))
+#define _mm256_maskz_cmul_round_pch(U, A, B, R) \
+  _mm256_maskz_fcmul_round_pch ((U), (A), (B), (R))
+
 #ifdef __DISABLE_AVX10_2_256__
 #undef __DISABLE_AVX10_2_256__
 #pragma GCC pop_options
diff --git a/gcc/config/i386/i386-builtin-types.def b/gcc/config/i386/i386-builtin-types.def
index c89c6e4021b..fc5c65e8b05 100644
--- a/gcc/config/i386/i386-builtin-types.def
+++ b/gcc/config/i386/i386-builtin-types.def
@@ -1440,3 +1440,8 @@ DEF_FUNCTION_TYPE (V4DF, V4DI, V4DF, UQI, INT)
 DEF_FUNCTION_TYPE (V8HF, V4DI, V8HF, UQI, INT)
 DEF_FUNCTION_TYPE (V4SF, V4DI, V4SF, UQI, INT)
 DEF_FUNCTION_TYPE (V16HF, V16HI, V16HF, UHI, INT)
+DEF_FUNCTION_TYPE (V16HF, V16HF, V16HF, V16HF, INT)
+DEF_FUNCTION_TYPE (V16HF, V16HF, V16HF, V16HF, UQI, INT)
+DEF_FUNCTION_TYPE (V4DF, V4DF, V4DF, V4DI, INT, UQI, INT)
+DEF_FUNCTION_TYPE (V8SF, V8SF, V8SF, V8SI, INT, UQI, INT)
+DEF_FUNCTION_TYPE (V16HF, V16HF, V16HF, INT)
diff --git a/gcc/config/i386/i386-builtin.def b/gcc/config/i386/i386-builtin.def
index 8e655726775..55a644a67bb 100644
--- a/gcc/config/i386/i386-builtin.def
+++ b/gcc/config/i386/i386-builtin.def
@@ -3375,6 +3375,16 @@ BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512fp16_vcvtw2ph_v16hi_mask_
 BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx_divv4df3_mask_round, "__builtin_ia32_divpd256_mask_round", IX86_BUILTIN_VDIVPD256_MASK_ROUND, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DF_V4DF_UQI_INT)
 BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512fp16_divv16hf3_mask_round, "__builtin_ia32_divph256_mask_round", IX86_BUILTIN_VDIVPH256_MASK_ROUND, UNKNOWN, (int) V16HF_FTYPE_V16HF_V16HF_V16HF_UHI_INT)
 BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx_divv8sf3_mask_round, "__builtin_ia32_divps256_mask_round", IX86_BUILTIN_VDIVPS256_MASK_ROUND, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SF_V8SF_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_fma_fcmaddc_v16hf_round, "__builtin_ia32_vfcmaddcph256_round", IX86_BUILTIN_VFCMADDCPH256_ROUND, UNKNOWN, (int) V16HF_FTYPE_V16HF_V16HF_V16HF_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512vl_fcmaddc_v16hf_mask1_round, "__builtin_ia32_vfcmaddcph256_mask_round", IX86_BUILTIN_VFCMADDCPH256_MASK_ROUND, UNKNOWN, (int) V16HF_FTYPE_V16HF_V16HF_V16HF_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512vl_fcmaddc_v16hf_mask_round, "__builtin_ia32_vfcmaddcph256_mask3_round", IX86_BUILTIN_VFCMADDCPH256_MASK3_ROUND, UNKNOWN, (int) V16HF_FTYPE_V16HF_V16HF_V16HF_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512vl_fcmaddc_v16hf_maskz_round, "__builtin_ia32_vfcmaddcph256_maskz_round", IX86_BUILTIN_VFCMADDCPH256_MASKZ_ROUND, UNKNOWN, (int) V16HF_FTYPE_V16HF_V16HF_V16HF_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512vl_fcmulc_v16hf_round, "__builtin_ia32_vfcmulcph256_round", IX86_BUILTIN_VFCMULCPH256_ROUND, UNKNOWN, (int) V16HF_FTYPE_V16HF_V16HF_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512vl_fcmulc_v16hf_mask_round, "__builtin_ia32_vfcmulcph256_mask_round", IX86_BUILTIN_VFCMULCPH256_MASK_ROUND, UNKNOWN, (int) V16HF_FTYPE_V16HF_V16HF_V16HF_UHI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512vl_fixupimmv4df_mask_round, "__builtin_ia32_fixupimmpd256_mask_round", IX86_BUILTIN_VFIXUPIMMPD256_MASK_ROUND, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512vl_fixupimmv4df_maskz_round, "__builtin_ia32_fixupimmpd256_maskz_round", IX86_BUILTIN_VFIXUPIMMPD256_MASKZ_ROUND, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512vl_fixupimmv8sf_mask_round, "__builtin_ia32_fixupimmps256_mask_round", IX86_BUILTIN_VFIXUPIMMPS256_MASK_ROUND, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX10_2_256, CODE_FOR_avx512vl_fixupimmv8sf_maskz_round, "__builtin_ia32_fixupimmps256_maskz_round", IX86_BUILTIN_VFIXUPIMMPS256_MASKZ_ROUND, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI_INT)
 BDESC_END (ROUND_ARGS, MULTI_ARG)
diff --git a/gcc/config/i386/i386-expand.cc b/gcc/config/i386/i386-expand.cc
index 613e3be7ce3..95b0917a677 100644
--- a/gcc/config/i386/i386-expand.cc
+++ b/gcc/config/i386/i386-expand.cc
@@ -12426,6 +12426,7 @@ ix86_expand_round_builtin (const struct builtin_description *d,
       nargs = 2;
       break;
     case V32HF_FTYPE_V32HF_V32HF_INT:
+    case V16HF_FTYPE_V16HF_V16HF_INT:
    case V8HF_FTYPE_V8HF_V8HF_INT:
     case V8HF_FTYPE_V8HF_INT_INT:
     case V8HF_FTYPE_V8HF_UINT_INT:
@@ -12461,6 +12462,7 @@ ix86_expand_round_builtin (const struct builtin_description *d,
     case V16SF_FTYPE_V16SI_V16SF_HI_INT:
     case V16SI_FTYPE_V16SF_V16SI_HI_INT:
     case V16SI_FTYPE_V16HF_V16SI_UHI_INT:
+    case V16HF_FTYPE_V16HF_V16HF_V16HF_INT:
     case V16HF_FTYPE_V16SI_V16HF_UHI_INT:
     case V16HI_FTYPE_V16HF_V16HI_UHI_INT:
     case V8DF_FTYPE_V8SF_V8DF_QI_INT:
@@ -12507,6 +12509,7 @@ ix86_expand_round_builtin (const struct builtin_description *d,
     case V8SF_FTYPE_V8SF_V8SF_V8SF_UQI_INT:
     case V16SF_FTYPE_V16SF_V16SF_V16SF_HI_INT:
     case V16HF_FTYPE_V16HF_V16HF_V16HF_UHI_INT:
+    case V16HF_FTYPE_V16HF_V16HF_V16HF_UQI_INT:
     case V32HF_FTYPE_V32HF_V32HF_V32HF_UHI_INT:
     case V32HF_FTYPE_V32HF_V32HF_V32HF_USI_INT:
     case V2DF_FTYPE_V8HF_V2DF_V2DF_UQI_INT:
@@ -12552,7 +12555,9 @@ ix86_expand_round_builtin (const struct builtin_description *d,
       nargs_constant = 4;
       break;
     case V8DF_FTYPE_V8DF_V8DF_V8DI_INT_QI_INT:
+    case V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI_INT:
     case V16SF_FTYPE_V16SF_V16SF_V16SI_INT_HI_INT:
+    case V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI_INT:
     case V2DF_FTYPE_V2DF_V2DF_V2DI_INT_QI_INT:
     case V4SF_FTYPE_V4SF_V4SF_V4SI_INT_QI_INT:
       nargs = 6;
diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md
index d6ac88c0b6e..10640c6fef0 100644
--- a/gcc/config/i386/sse.md
+++ b/gcc/config/i386/sse.md
@@ -13856,7 +13856,7 @@
 	   (match_operand:<sseintvecmode> 3 "nonimmediate_operand" "<round_saeonly_constraint>")
 	   (match_operand:SI 4 "const_0_to_255_operand")]
 	   UNSPEC_FIXUPIMM))]
-  "TARGET_AVX512F"
+  "TARGET_AVX512F && <round_saeonly_mode_condition>"
   "vfixupimm<ssemodesuffix>\t{%4, %3, %2, %0|%0, %2, %3, %4}";
   [(set_attr "prefix" "evex")
    (set_attr "mode" "<MODE>")])
@@ -13872,7 +13872,7 @@
 	  UNSPEC_FIXUPIMM)
 	  (match_dup 1)
 	  (match_operand:<avx512fmaskmode> 5 "register_operand" "Yk")))]
-  "TARGET_AVX512F"
+  "TARGET_AVX512F && <round_saeonly_mode_condition>"
   "vfixupimm<ssemodesuffix>\t{%4, %3, %2, %0%{%5%}|%0%{%5%}, %2, %3, %4}";
[(set_attr "prefix" "evex") (set_attr "mode" "")]) diff --git a/gcc/testsuite/gcc.target/i386/avx-1.c b/gcc/testsuite/gcc.target/i386/avx-1.c index a6687d091d2..2e5c2f2d786 100644 --- a/gcc/testsuite/gcc.target/i386/avx-1.c +++ b/gcc/testsuite/gcc.target/i386/avx-1.c @@ -899,6 +899,16 @@ #define __builtin_ia32_divpd256_mask_round(A, B, C, D, E) __builtin_ia32_divpd256_mask_round(A, B, C, D, 8) #define __builtin_ia32_divph256_mask_round(A, B, C, D, E) __builtin_ia32_divph256_mask_round(A, B, C, D, 8) #define __builtin_ia32_divps256_mask_round(A, B, C, D, E) __builtin_ia32_divps256_mask_round(A, B, C, D, 8) +#define __builtin_ia32_vfcmaddcph256_round(A, B, C, D) __builtin_ia32_vfcmaddcph256_round(A, B, C, 8) +#define __builtin_ia32_vfcmaddcph256_mask_round(A, C, D, B, E) __builtin_ia32_vfcmaddcph256_mask_round(A, C, D, B, 8) +#define __builtin_ia32_vfcmaddcph256_mask3_round(A, C, D, B, E) __builtin_ia32_vfcmaddcph256_mask3_round(A, C, D, B, 8) +#define __builtin_ia32_vfcmaddcph256_maskz_round(B, C, D, A, E) __builtin_ia32_vfcmaddcph256_maskz_round(B, C, D, A, 8) +#define __builtin_ia32_vfcmulcph256_round(A, B, C) __builtin_ia32_vfcmulcph256_round(A, B, 8) +#define __builtin_ia32_vfcmulcph256_mask_round(A, B, C, D, E) __builtin_ia32_vfcmulcph256_mask_round(A, B, C, D, 8) +#define __builtin_ia32_fixupimmpd256_mask_round(A, B, C, I, E, F) __builtin_ia32_fixupimmpd256_mask_round(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmpd256_maskz_round(A, B, C, I, E, F) __builtin_ia32_fixupimmpd256_maskz_round(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmps256_mask_round(A, B, C, I, E, F) __builtin_ia32_fixupimmps256_mask_round(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmps256_maskz_round(A, B, C, I, E, F) __builtin_ia32_fixupimmps256_maskz_round(A, B, C, 1, E, 8) #include #include diff --git a/gcc/testsuite/gcc.target/i386/avx10_2-rounding-3.c b/gcc/testsuite/gcc.target/i386/avx10_2-rounding-3.c index c2313e94d72..16dcb274909 100644 --- a/gcc/testsuite/gcc.target/i386/avx10_2-rounding-3.c +++ b/gcc/testsuite/gcc.target/i386/avx10_2-rounding-3.c @@ -15,6 +15,18 @@ /* { dg-final { scan-assembler-times "vdivps\[ \\t\]+\[^\n\]*\{rn-sae\}\[^\{\n\]*%ymm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */ /* { dg-final { scan-assembler-times "vdivps\[ \\t\]+\[^\n\]*\{ru-sae\}\[^\{\n\]*%ymm\[0-9\]+\{%k\[1-7\]\}(?:\n|\[ \\t\]+#)" 1 } } */ /* { dg-final { scan-assembler-times "vdivps\[ \\t\]+\[^\n\]*\{rz-sae\}\[^\{\n\]*%ymm\[0-9\]+\{%k\[1-7\]\}\{z\}(?:\n|\[ \\t\]+#)" 1 } } */ +/* { dg-final { scan-assembler-times "vfcmaddcph\[ \\t\]+\{rn-sae\}\[^\{\n\]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */ +/* { dg-final { scan-assembler-times "vfcmaddcph\[ \\t\]+\{rn-sae\}\[^\{\n\]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\{%k\[0-9\]\}\[^\n\r]*(?:\n|\[ \\t\]+#)" 2 } } */ +/* { dg-final { scan-assembler-times "vfcmaddcph\[ \\t\]+\{rz-sae\}\[^\{\n\]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\{%k\[0-9\]\}\{z\}\[^\n\r]*(?:\n|\[ \\t\]+#)" 1 } } */ +/* { dg-final { scan-assembler-times "vfcmulcph\[ \\t\]+\{rn-sae\}\[^\{\n\]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+(?:\n|\[ \\t\]+#)" 2 } } */ +/* { dg-final { scan-assembler-times "vfcmulcph\[ \\t\]+\{rn-sae\}\[^\{\n\]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\{%k\[0-9\]\}\[^\n\r]*(?:\n|\[ \\t\]+#)" 2 } } */ +/* { dg-final { scan-assembler-times "vfcmulcph\[ \\t\]+\{rz-sae\}\[^\{\n\]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\[^\n\r]*%ymm\[0-9\]+\{%k\[0-9\]\}\{z\}\[^\n\r]*(?:\n|\[ \\t\]+#)" 2 } } */ +/* { 
dg-final { scan-assembler-times "vfixupimmpd\[ \\t\]+\[^\{\n\]*\{sae\}\[^\n\]*%ymm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */ +/* { dg-final { scan-assembler-times "vfixupimmpd\[ \\t\]+\[^\{\n\]*\{sae\}\[^\n\]*%ymm\[0-9\]+\{%k\[1-7\]\}(?:\n|\[ \\t\]+#)" 1 } } */ +/* { dg-final { scan-assembler-times "vfixupimmpd\[ \\t\]+\[^\{\n\]*\{sae\}\[^\n\]*%ymm\[0-9\]+\{%k\[1-7\]\}\{z\}(?:\n|\[ \\t\]+#)" 1 } } */ +/* { dg-final { scan-assembler-times "vfixupimmps\[ \\t\]+\[^\{\n\]*\{sae\}\[^\n\]*%ymm\[0-9\]+(?:\n|\[ \\t\]+#)" 1 } } */ +/* { dg-final { scan-assembler-times "vfixupimmps\[ \\t\]+\[^\{\n\]*\{sae\}\[^\n\]*%ymm\[0-9\]+\{%k\[1-7\]\}(?:\n|\[ \\t\]+#)" 1 } } */ +/* { dg-final { scan-assembler-times "vfixupimmps\[ \\t\]+\[^\{\n\]*\{sae\}\[^\n\]*%ymm\[0-9\]+\{%k\[1-7\]\}\{z\}(?:\n|\[ \\t\]+#)" 1 } } */ #include @@ -56,3 +68,40 @@ avx10_2_test_2 (void) x = _mm256_mask_div_round_ps (x, m16, x, x, _MM_FROUND_TO_POS_INF | _MM_FROUND_NO_EXC); x = _mm256_maskz_div_round_ps (m16, x, x, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC); } + +void extern +avx10_2_test_3 (void) +{ + xh = _mm256_fcmadd_round_pch (xh, xh, xh, 8); + xh = _mm256_mask_fcmadd_round_pch (xh, m8, xh, xh, 8); + xh = _mm256_mask3_fcmadd_round_pch (xh, xh, xh, m8, 8); + xh = _mm256_maskz_fcmadd_round_pch (m8, xh, xh, xh, 11); +} + +void extern +avx10_2_test_4 (void) +{ + xh = _mm256_fcmul_round_pch (xh, xh, 8); + xh = _mm256_mask_fcmul_round_pch (xh, m8, xh, xh, 8); + xh = _mm256_maskz_fcmul_round_pch (m8, xh, xh, 11); +} + +void extern +avx10_2_test_5 (void) +{ + xh = _mm256_cmul_round_pch (xh, xh, 8); + xh = _mm256_mask_cmul_round_pch (xh, m8, xh, xh, 8); + xh = _mm256_maskz_cmul_round_pch (m8, xh, xh, 11); +} + +void extern +avx10_2_test_6 (void) +{ + xd = _mm256_fixupimm_round_pd (xd, xd, xi, 3, _MM_FROUND_NO_EXC); + xd = _mm256_mask_fixupimm_round_pd (xd, m8, xd, xi, 3, _MM_FROUND_NO_EXC); + xd = _mm256_maskz_fixupimm_round_pd (m8, xd, xd, xi, 3, _MM_FROUND_NO_EXC); + + x = _mm256_fixupimm_round_ps (x, x, xi, 3, _MM_FROUND_NO_EXC); + x = _mm256_mask_fixupimm_round_ps (x, m8, x, xi, 3, _MM_FROUND_NO_EXC); + x = _mm256_maskz_fixupimm_round_ps (m8, x, x, xi, 3, _MM_FROUND_NO_EXC); +} diff --git a/gcc/testsuite/gcc.target/i386/sse-13.c b/gcc/testsuite/gcc.target/i386/sse-13.c index 3e99f8bd39a..b84791aeaae 100644 --- a/gcc/testsuite/gcc.target/i386/sse-13.c +++ b/gcc/testsuite/gcc.target/i386/sse-13.c @@ -906,5 +906,15 @@ #define __builtin_ia32_divpd256_mask_round(A, B, C, D, E) __builtin_ia32_divpd256_mask_round(A, B, C, D, 8) #define __builtin_ia32_divph256_mask_round(A, B, C, D, E) __builtin_ia32_divph256_mask_round(A, B, C, D, 8) #define __builtin_ia32_divps256_mask_round(A, B, C, D, E) __builtin_ia32_divps256_mask_round(A, B, C, D, 8) +#define __builtin_ia32_vfcmaddcph256_round(A, B, C, D) __builtin_ia32_vfcmaddcph256_round(A, B, C, 8) +#define __builtin_ia32_vfcmaddcph256_mask_round(A, C, D, B, E) __builtin_ia32_vfcmaddcph256_mask_round(A, C, D, B, 8) +#define __builtin_ia32_vfcmaddcph256_mask3_round(A, C, D, B, E) __builtin_ia32_vfcmaddcph256_mask3_round(A, C, D, B, 8) +#define __builtin_ia32_vfcmaddcph256_maskz_round(B, C, D, A, E) __builtin_ia32_vfcmaddcph256_maskz_round(B, C, D, A, 8) +#define __builtin_ia32_vfcmulcph256_round(A, B, C) __builtin_ia32_vfcmulcph256_round(A, B, 8) +#define __builtin_ia32_vfcmulcph256_mask_round(A, B, C, D, E) __builtin_ia32_vfcmulcph256_mask_round(A, B, C, D, 8) +#define __builtin_ia32_fixupimmpd256_mask_round(A, B, C, D, E, F) __builtin_ia32_fixupimmpd256_mask_round(A, B, C, 1, E, 8) +#define 
+#define __builtin_ia32_fixupimmpd256_maskz_round(A, B, C, D, E, F) __builtin_ia32_fixupimmpd256_maskz_round(A, B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmps256_mask_round(A, B, C, D, E, F) __builtin_ia32_fixupimmps256_mask_round(A, B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmps256_maskz_round(A, B, C, D, E, F) __builtin_ia32_fixupimmps256_maskz_round(A, B, C, 1, E, 8)
 
 #include
diff --git a/gcc/testsuite/gcc.target/i386/sse-14.c b/gcc/testsuite/gcc.target/i386/sse-14.c
index 21636e856a9..c9fb7ec8345 100644
--- a/gcc/testsuite/gcc.target/i386/sse-14.c
+++ b/gcc/testsuite/gcc.target/i386/sse-14.c
@@ -1119,6 +1119,7 @@ test_2 (_mm256_maskz_cvt_roundepi16_ph, __m256h, __mmask16, __m256i, 8)
 test_2 (_mm256_div_round_pd, __m256d, __m256d, __m256d, 9)
 test_2 (_mm256_div_round_ph, __m256h, __m256h, __m256h, 9)
 test_2 (_mm256_div_round_ps, __m256, __m256, __m256, 9)
+test_2 (_mm256_fcmul_round_pch, __m256h, __m256h, __m256h, 8)
 test_2x (_mm256_cmp_round_pd_mask, __mmask8, __m256d, __m256d, 1, 8)
 test_2x (_mm256_cmp_round_ph_mask, __mmask16, __m256h, __m256h, 1, 8)
 test_2x (_mm256_cmp_round_ps_mask, __mmask8, __m256, __m256, 1, 8)
@@ -1175,12 +1176,24 @@ test_3 (_mm256_mask_cvt_roundepi16_ph, __m256h, __m256h, __mmask16, __m256i, 8)
 test_3 (_mm256_maskz_div_round_pd, __m256d, __mmask8, __m256d, __m256d, 9)
 test_3 (_mm256_maskz_div_round_ph, __m256h, __mmask8, __m256h, __m256h, 9)
 test_3 (_mm256_maskz_div_round_ps, __m256, __mmask8, __m256, __m256, 9)
+test_3 (_mm256_fcmadd_round_pch, __m256h, __m256h, __m256h, __m256h, 8)
+test_3 (_mm256_maskz_fcmul_round_pch, __m256h, __mmask8, __m256h, __m256h, 8)
 test_3x (_mm256_mask_cmp_round_pd_mask, __mmask8, __mmask8, __m256d, __m256d, 1, 8)
 test_3x (_mm256_mask_cmp_round_ph_mask, __mmask16, __mmask16, __m256h, __m256h, 1, 8)
 test_3x (_mm256_mask_cmp_round_ps_mask, __mmask8, __mmask8, __m256, __m256, 1, 8)
+test_3x (_mm256_fixupimm_round_pd, __m256d, __m256d, __m256d, __m256i, 3, 8)
+test_3x (_mm256_fixupimm_round_ps, __m256, __m256, __m256, __m256i, 3, 8)
 test_4 (_mm256_mask_add_round_pd, __m256d, __m256d, __mmask8, __m256d, __m256d, 9)
 test_4 (_mm256_mask_add_round_ph, __m256h, __m256h, __mmask16, __m256h, __m256h, 8)
 test_4 (_mm256_mask_add_round_ps, __m256, __m256, __mmask8, __m256, __m256, 9)
 test_4 (_mm256_mask_div_round_pd, __m256d, __m256d, __mmask8, __m256d, __m256d, 9)
 test_4 (_mm256_mask_div_round_ph, __m256h, __m256h, __mmask8, __m256h, __m256h, 9)
 test_4 (_mm256_mask_div_round_ps, __m256, __m256, __mmask8, __m256, __m256, 9)
+test_4 (_mm256_mask_fcmadd_round_pch, __m256h, __m256h, __mmask8, __m256h, __m256h, 9)
+test_4 (_mm256_mask3_fcmadd_round_pch, __m256h, __m256h, __m256h, __m256h, __mmask8, 9)
+test_4 (_mm256_maskz_fcmadd_round_pch, __m256h, __mmask8, __m256h, __m256h, __m256h, 9)
+test_4 (_mm256_mask_fcmul_round_pch, __m256h, __m256h, __mmask8, __m256h, __m256h, 8)
+test_4x (_mm256_maskz_fixupimm_round_pd, __m256d, __mmask8, __m256d, __m256d, __m256i, 3, 8)
+test_4x (_mm256_maskz_fixupimm_round_ps, __m256, __mmask8, __m256, __m256, __m256i, 3, 8)
+test_4x (_mm256_mask_fixupimm_round_pd, __m256d, __m256d, __mmask8, __m256d, __m256i, 3, 8)
+test_4x (_mm256_mask_fixupimm_round_ps, __m256, __m256, __mmask8, __m256, __m256i, 3, 8)
diff --git a/gcc/testsuite/gcc.target/i386/sse-22.c b/gcc/testsuite/gcc.target/i386/sse-22.c
index 61aa49e8e0b..f9086ef03c3 100644
--- a/gcc/testsuite/gcc.target/i386/sse-22.c
+++ b/gcc/testsuite/gcc.target/i386/sse-22.c
@@ -1162,6 +1162,7 @@ test_2 (_mm256_maskz_cvt_roundepi16_ph, __m256h, __mmask16, __m256i, 8)
 test_2 (_mm256_div_round_pd, __m256d, __m256d, __m256d, 9)
 test_2 (_mm256_div_round_ph, __m256h, __m256h, __m256h, 9)
 test_2 (_mm256_div_round_ps, __m256, __m256, __m256, 9)
+test_2 (_mm256_fcmul_round_pch, __m256h, __m256h, __m256h, 8)
 test_2x (_mm256_cmp_round_pd_mask, __mmask8, __m256d, __m256d, 1, 8)
 test_2x (_mm256_cmp_round_ph_mask, __mmask16, __m256h, __m256h, 1, 8)
 test_2x (_mm256_cmp_round_ps_mask, __mmask8, __m256, __m256, 1, 8)
@@ -1218,12 +1219,24 @@ test_3 (_mm256_mask_cvt_roundepi16_ph, __m256h, __m256h, __mmask16, __m256i, 8)
 test_3 (_mm256_maskz_div_round_pd, __m256d, __mmask8, __m256d, __m256d, 9)
 test_3 (_mm256_maskz_div_round_ph, __m256h, __mmask8, __m256h, __m256h, 9)
 test_3 (_mm256_maskz_div_round_ps, __m256, __mmask8, __m256, __m256, 9)
+test_3 (_mm256_fcmadd_round_pch, __m256h, __m256h, __m256h, __m256h, 8)
+test_3 (_mm256_maskz_fcmul_round_pch, __m256h, __mmask8, __m256h, __m256h, 8)
 test_3x (_mm256_mask_cmp_round_pd_mask, __mmask8, __mmask8, __m256d, __m256d, 1, 8)
 test_3x (_mm256_mask_cmp_round_ph_mask, __mmask16, __mmask16, __m256h, __m256h, 1, 8)
 test_3x (_mm256_mask_cmp_round_ps_mask, __mmask8, __mmask8, __m256, __m256, 1, 8)
+test_3x (_mm256_fixupimm_round_pd, __m256d, __m256d, __m256d, __m256i, 3, 8)
+test_3x (_mm256_fixupimm_round_ps, __m256, __m256, __m256, __m256i, 3, 8)
 test_4 (_mm256_mask_add_round_pd, __m256d, __m256d, __mmask8, __m256d, __m256d, 9)
 test_4 (_mm256_mask_add_round_ph, __m256h, __m256h, __mmask16, __m256h, __m256h, 8)
 test_4 (_mm256_mask_add_round_ps, __m256, __m256, __mmask8, __m256, __m256, 9)
 test_4 (_mm256_mask_div_round_pd, __m256d, __m256d, __mmask8, __m256d, __m256d, 9)
 test_4 (_mm256_mask_div_round_ph, __m256h, __m256h, __mmask8, __m256h, __m256h, 9)
 test_4 (_mm256_mask_div_round_ps, __m256, __m256, __mmask8, __m256, __m256, 9)
+test_4 (_mm256_mask_fcmadd_round_pch, __m256h, __m256h, __mmask8, __m256h, __m256h, 9)
+test_4 (_mm256_mask3_fcmadd_round_pch, __m256h, __m256h, __m256h, __m256h, __mmask8, 9)
+test_4 (_mm256_maskz_fcmadd_round_pch, __m256h, __mmask8, __m256h, __m256h, __m256h, 9)
+test_4 (_mm256_mask_fcmul_round_pch, __m256h, __m256h, __mmask8, __m256h, __m256h, 8)
+test_4x (_mm256_maskz_fixupimm_round_pd, __m256d, __mmask8, __m256d, __m256d, __m256i, 3, 8)
+test_4x (_mm256_maskz_fixupimm_round_ps, __m256, __mmask8, __m256, __m256, __m256i, 3, 8)
+test_4x (_mm256_mask_fixupimm_round_pd, __m256d, __m256d, __mmask8, __m256d, __m256i, 3, 8)
+test_4x (_mm256_mask_fixupimm_round_ps, __m256, __m256, __mmask8, __m256, __m256i, 3, 8)
diff --git a/gcc/testsuite/gcc.target/i386/sse-23.c b/gcc/testsuite/gcc.target/i386/sse-23.c
index fe7850f23d2..9c42d969b62 100644
--- a/gcc/testsuite/gcc.target/i386/sse-23.c
+++ b/gcc/testsuite/gcc.target/i386/sse-23.c
@@ -881,6 +881,16 @@
 #define __builtin_ia32_divpd256_mask_round(A, B, C, D, E) __builtin_ia32_divpd256_mask_round(A, B, C, D, 8)
 #define __builtin_ia32_divph256_mask_round(A, B, C, D, E) __builtin_ia32_divph256_mask_round(A, B, C, D, 8)
 #define __builtin_ia32_divps256_mask_round(A, B, C, D, E) __builtin_ia32_divps256_mask_round(A, B, C, D, 8)
+#define __builtin_ia32_vfcmaddcph256_round(A, B, C, D) __builtin_ia32_vfcmaddcph256_round(A, B, C, 8)
+#define __builtin_ia32_vfcmaddcph256_mask_round(A, C, D, B, E) __builtin_ia32_vfcmaddcph256_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfcmaddcph256_mask3_round(A, C, D, B, E) __builtin_ia32_vfcmaddcph256_mask3_round(A, C, D, B, 8)
+#define __builtin_ia32_vfcmaddcph256_maskz_round(B, C, D, A, E) __builtin_ia32_vfcmaddcph256_maskz_round(B, C, D, A, 8)
+#define __builtin_ia32_vfcmulcph256_round(A, B, C) __builtin_ia32_vfcmulcph256_round(A, B, 8)
+#define __builtin_ia32_vfcmulcph256_mask_round(A, B, C, D, E) __builtin_ia32_vfcmulcph256_mask_round(A, B, C, D, 8)
+#define __builtin_ia32_fixupimmpd256_mask_round(A, B, C, D, E, F) __builtin_ia32_fixupimmpd256_mask_round(A, B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmpd256_maskz_round(A, B, C, D, E, F) __builtin_ia32_fixupimmpd256_maskz_round(A, B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmps256_mask_round(A, B, C, D, E, F) __builtin_ia32_fixupimmps256_mask_round(A, B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmps256_maskz_round(A, B, C, D, E, F) __builtin_ia32_fixupimmps256_maskz_round(A, B, C, 1, E, 8)
 
 #pragma GCC target ("sse4a,3dnow,avx,avx2,fma4,xop,aes,pclmul,popcnt,abm,lzcnt,bmi,bmi2,tbm,lwp,fsgsbase,rdrnd,f16c,fma,rtm,rdseed,prfchw,adx,fxsr,xsaveopt,sha,xsavec,xsaves,clflushopt,clwb,mwaitx,clzero,pku,sgx,rdpid,gfni,vpclmulqdq,pconfig,wbnoinvd,enqcmd,avx512vp2intersect,serialize,tsxldtrk,amx-tile,amx-int8,amx-bf16,kl,widekl,avxvnni,avxifma,avxvnniint8,avxneconvert,cmpccxadd,amx-fp16,prefetchi,raoint,amx-complex,avxvnniint16,sm3,sha512,sm4,avx10.2-512")
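
Not part of the patch, for reference only: a minimal caller sketch for the
new fixupimm intrinsics, assuming a compiler with this series applied and
-mavx10.2-256 enabled. The __m256i operand supplies the per-element fixup
token table; the immediate (3 in the new tests) selects which special-value
cases may raise faults, and {sae} suppresses exceptions:

    #include <immintrin.h>

    __m256d
    fixup_sae (__m256d a, __m256d b, __m256i tbl)
    {
      /* Immediate 3 matches the value used in avx10_2-rounding-3.c;
	 _MM_FROUND_NO_EXC selects the {sae} encoding.  */
      return _mm256_fixupimm_round_pd (a, b, tbl, 3, _MM_FROUND_NO_EXC);
    }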