From patchwork Wed Jun 17 19:46:48 2020
X-Patchwork-Submitter: Bill Schmidt
X-Patchwork-Id: 1311465
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 25/28] rs6000: Add MASK_P8_VECTOR builtins
Date: Wed, 17 Jun 2020 14:46:48 -0500
Message-Id: <55f98b306c96b567953b5e0f42bad04c9694221c.1592419212.git.wschmidt@linux.ibm.com>
From: Bill Schmidt
Cc: dje.gcc@gmail.com, segher@kernel.crashing.org, willschm@linux.ibm.com

2020-06-17  Bill Schmidt  <wschmidt@linux.ibm.com>

	* config/rs6000/rs6000-builtin-new.def: Add MASK_P8_VECTOR
	builtins.
---
 gcc/config/rs6000/rs6000-builtin-new.def | 417 +++++++++++++++++++++++
 1 file changed, 417 insertions(+)

diff --git a/gcc/config/rs6000/rs6000-builtin-new.def b/gcc/config/rs6000/rs6000-builtin-new.def
index 387ea7aff32..e980e749d1d 100644
--- a/gcc/config/rs6000/rs6000-builtin-new.def
+++ b/gcc/config/rs6000/rs6000-builtin-new.def
@@ -1976,3 +1976,420 @@
     DIVDEU diveu_di {}
 
 
+; Power8 vector built-ins.
+[MASK_P8_VECTOR]
+  const vsll __builtin_altivec_abs_v2di (vsll);
+    ABS_V2DI absv2di2 {}
+
+  const vsc __builtin_altivec_eqv_v16qi (vsc, vsc);
+    EQV_V16QI eqvv16qi3 {}
+
+  const vuc __builtin_altivec_eqv_v16qi_uns (vuc, vuc);
+    EQV_V16QI_UNS eqvv16qi3 {}
+
+  const vsq __builtin_altivec_eqv_v1ti (vsq, vsq);
+    EQV_V1TI eqvv1ti3 {}
+
+  const vuq __builtin_altivec_eqv_v1ti_uns (vuq, vuq);
+    EQV_V1TI_UNS eqvv1ti3 {}
+
+  const vd __builtin_altivec_eqv_v2df (vd, vd);
+    EQV_V2DF eqvv2df3 {}
+
+  const vsll __builtin_altivec_eqv_v2di (vsll, vsll);
+    EQV_V2DI eqvv2di3 {}
+
+  const vull __builtin_altivec_eqv_v2di_uns (vull, vull);
+    EQV_V2DI_UNS eqvv2di3 {}
+
+  const vf __builtin_altivec_eqv_v4sf (vf, vf);
+    EQV_V4SF eqvv4sf3 {}
+
+  const vsi __builtin_altivec_eqv_v4si (vsi, vsi);
+    EQV_V4SI eqvv4si3 {}
+
+  const vui __builtin_altivec_eqv_v4si_uns (vui, vui);
+    EQV_V4SI_UNS eqvv4si3 {}
+
+  const vss __builtin_altivec_eqv_v8hi (vss, vss);
+    EQV_V8HI eqvv8hi3 {}
+
+  const vus __builtin_altivec_eqv_v8hi_uns (vus, vus);
+    EQV_V8HI_UNS eqvv8hi3 {}
+
+  const vsc __builtin_altivec_nand_v16qi (vsc, vsc);
+    NAND_V16QI nandv16qi3 {}
+
+  const vuc __builtin_altivec_nand_v16qi_uns (vuc, vuc);
+    NAND_V16QI_UNS nandv16qi3 {}
+
+  const vsq __builtin_altivec_nand_v1ti (vsq, vsq);
+    NAND_V1TI nandv1ti3 {}
+
+  const vuq __builtin_altivec_nand_v1ti_uns (vuq, vuq);
+    NAND_V1TI_UNS nandv1ti3 {}
+
+  const vd __builtin_altivec_nand_v2df (vd, vd);
+    NAND_V2DF nandv2df3 {}
+
+  const vsll __builtin_altivec_nand_v2di (vsll, vsll);
+    NAND_V2DI nandv2di3 {}
+
+  const vull __builtin_altivec_nand_v2di_uns (vull, vull);
+    NAND_V2DI_UNS nandv2di3 {}
+
+  const vf __builtin_altivec_nand_v4sf (vf, vf);
+    NAND_V4SF nandv4sf3 {}
+
+  const vsi __builtin_altivec_nand_v4si (vsi, vsi);
+    NAND_V4SI nandv4si3 {}
+
+  const vui __builtin_altivec_nand_v4si_uns (vui, vui);
+    NAND_V4SI_UNS nandv4si3 {}
+
+  const vss __builtin_altivec_nand_v8hi (vss, vss);
+    NAND_V8HI nandv8hi3 {}
+
+  const vus __builtin_altivec_nand_v8hi_uns (vus, vus);
+    NAND_V8HI_UNS nandv8hi3 {}
+
+  const vsc __builtin_altivec_neg_v16qi (vsc);
+    NEG_V16QI negv16qi2 {}
+
+  const vd __builtin_altivec_neg_v2df (vd);
+    NEG_V2DF negv2df2 {}
+
+  const vsll __builtin_altivec_neg_v2di (vsll);
+    NEG_V2DI negv2di2 {}
+
+  const vf __builtin_altivec_neg_v4sf (vf);
+    NEG_V4SF negv4sf2 {}
+
+  const vsi __builtin_altivec_neg_v4si (vsi);
+    NEG_V4SI negv4si2 {}
+
+  const vss __builtin_altivec_neg_v8hi (vss);
+    NEG_V8HI negv8hi2 {}
+
+  const vsc __builtin_altivec_orc_v16qi (vsc, vsc);
+    ORC_V16QI orcv16qi3 {}
+
+  const vuc __builtin_altivec_orc_v16qi_uns (vuc, vuc);
+    ORC_V16QI_UNS orcv16qi3 {}
+
+  const vsq __builtin_altivec_orc_v1ti (vsq, vsq);
+    ORC_V1TI orcv1ti3 {}
+
+  const vuq __builtin_altivec_orc_v1ti_uns (vuq, vuq);
+    ORC_V1TI_UNS orcv1ti3 {}
+
+  const vd __builtin_altivec_orc_v2df (vd, vd);
+    ORC_V2DF orcv2df3 {}
+
+  const vsll __builtin_altivec_orc_v2di (vsll, vsll);
+    ORC_V2DI orcv2di3 {}
+
+  const vull __builtin_altivec_orc_v2di_uns (vull, vull);
+    ORC_V2DI_UNS orcv2di3 {}
+
+  const vf __builtin_altivec_orc_v4sf (vf, vf);
+    ORC_V4SF orcv4sf3 {}
+
+  const vsi __builtin_altivec_orc_v4si (vsi, vsi);
+    ORC_V4SI orcv4si3 {}
+
+  const vui __builtin_altivec_orc_v4si_uns (vui, vui);
+    ORC_V4SI_UNS orcv4si3 {}
+
+  const vss __builtin_altivec_orc_v8hi (vss, vss);
+    ORC_V8HI orcv8hi3 {}
+
+  const vus __builtin_altivec_orc_v8hi_uns (vus, vus);
+    ORC_V8HI_UNS orcv8hi3 {}
+
+  const vsc __builtin_altivec_vclzb (vsc);
+    VCLZB clzv16qi2 {}
+
+  const vsll __builtin_altivec_vclzd (vsll);
+    VCLZD clzv2di2 {}
+
+  const vss __builtin_altivec_vclzh (vss);
+    VCLZH clzv8hi2 {}
+
+  const vsi __builtin_altivec_vclzw (vsi);
+    VCLZW clzv4si2 {}
+
+  const vsc __builtin_altivec_vgbbd (vsc);
+    VGBBD p8v_vgbbd {}
+
+  const vsq __builtin_altivec_vaddcuq (vsq, vsq);
+    VADDCUQ altivec_vaddcuq {}
+
+  const vsq __builtin_altivec_vaddecuq (vsq, vsq, vsq);
+    VADDECUQ altivec_vaddecuq {}
+
+  const vuq __builtin_altivec_vaddeuqm (vuq, vuq, vuq);
+    VADDEUQM altivec_vaddeuqm {}
+
+  const vsll __builtin_altivec_vaddudm (vsll, vsll);
+    VADDUDM addv2di3 {}
+
+  const vsq __builtin_altivec_vadduqm (vsq, vsq);
+    VADDUQM altivec_vadduqm {}
+
+  const vsll __builtin_altivec_vbpermq (vop, vsc);
+    VBPERMQ altivec_vbpermq {}
+
+  const vuc __builtin_altivec_vbpermq2 (vuc, vuc);
+    VBPERMQ2 altivec_vbpermq2 {}
+
+  const vbll __builtin_altivec_vcmpequd (vsll, vsll);
+    VCMPEQUD vector_eqv2di {}
+
+  const int __builtin_altivec_vcmpequd_p (int, vsll, vsll);
+    VCMPEQUD_P vector_eq_v2di_p {pred}
+
+  const vbll __builtin_altivec_vcmpgtsd (vsll, vsll);
+    VCMPGTSD vector_gtv2di {}
+
+  const int __builtin_altivec_vcmpgtsd_p (int, vsll, vsll);
+    VCMPGTSD_P vector_gt_v2di_p {pred}
+
+  const vbll __builtin_altivec_vcmpgtud (vull, vull);
+    VCMPGTUD vector_gtuv2di {}
+
+  const int __builtin_altivec_vcmpgtud_p (vull, vull);
+    VCMPGTUD_P vector_gtu_v2di_p {pred}
+
+  const vsll __builtin_altivec_vmaxsd (vsll, vsll);
+    VMAXSD smaxv2di3 {}
+
+  const vull __builtin_altivec_vmaxud (vull, vull);
+    VMAXUD umaxv2di3 {}
+
+  const vsll __builtin_altivec_vminsd (vsll, vsll);
+    VMINSD sminv2di3 {}
+
+  const vull __builtin_altivec_vminud (vull, vull);
+    VMINUD uminv2di3 {}
+
+  const vd __builtin_altivec_vmrgew_v2df (vd, vd);
+    VMRGEW_V2DF p8_vmrgew_v2df {}
+
+  const vsll __builtin_altivec_vmrgew_v2di (vsll, vsll);
+    VMRGEW_V2DI p8_vmrgew_v2di {}
+
+  const vf __builtin_altivec_vmrgew_v4sf (vf, vf);
+    VMRGEW_V4SF p8_vmrgew_v4sf {}
+
+  const vsi __builtin_altivec_vmrgew_v4si (vsi, vsi);
+    VMRGEW_V4SI p8_vmrgew_v4si {}
+
+  const vd __builtin_altivec_vmrgow_v2df (vd, vd);
+    VMRGOW_V2DF p8_vmrgow_v2df {}
+
+  const vsll __builtin_altivec_vmrgow_v2di (vsll, vsll);
+    VMRGOW_V2DI p8_vmrgow_v2di {}
+
+  const vf __builtin_altivec_vmrgow_v4sf (vf, vf);
+    VMRGOW_V4SF p8_vmrgow_v4sf {}
+
+  const vsi __builtin_altivec_vmrgow_v4si (vsi, vsi);
+    VMRGOW_V4SI p8_vmrgow_v4si {}
+
+  const vsll __builtin_altivec_vmulesw (vsi, vsi);
+    VMULESW vec_widen_smult_even_v4si {}
+
+  const vull __builtin_altivec_vmuleuw (vui, vui);
+    VMULEUW vec_widen_umult_even_v4si {}
+
+  const vsll __builtin_altivec_vmulosw (vsi, vsi);
+    VMULOSW vec_widen_smult_odd_v4si {}
+
+  const vull __builtin_altivec_vmulouw (vui, vui);
+    VMULOUW vec_widen_umult_odd_v4si {}
+
+  const vsc __builtin_altivec_vpermxor (vsc, vsc, vsc);
+    VPERMXOR altivec_vpermxor {}
+
+  const vsi __builtin_altivec_vpksdss (vsll, vsll);
+    VPKSDSS altivec_vpksdss {}
+
+  const vui __builtin_altivec_vpksdus (vsll, vsll);
+    VPKSDUS altivec_vpksdus {}
+
+  const vui __builtin_altivec_vpkudum (vull, vull);
+    VPKUDUM altivec_vpkudum {}
+
+  const vui __builtin_altivec_vpkudus (vull, vull);
+    VPKUDUS altivec_vpkudus {}
+
+; #### Following are duplicates of __builtin_crypto_vpmsum*. This
+; can't have ever worked properly!
+;
+;  const vus __builtin_altivec_vpmsumb (vuc, vuc);
+;    VPMSUMB crypto_vpmsumb {}
+;
+;  const vuq __builtin_altivec_vpmsumd (vull, vull);
+;    VPMSUMD crypto_vpmsumd {}
+;
+;  const vui __builtin_altivec_vpmsumh (vus, vus);
+;    VPMSUMH crypto_vpmsumh {}
+;
+;  const vull __builtin_altivec_vpmsumw (vui, vui);
+;    VPMSUMW crypto_vpmsumw {}
+
+  const vuc __builtin_altivec_vpopcntb (vsc);
+    VPOPCNTB popcountv16qi2 {}
+
+  const vull __builtin_altivec_vpopcntd (vsll);
+    VPOPCNTD popcountv2di2 {}
+
+  const vus __builtin_altivec_vpopcnth (vss);
+    VPOPCNTH popcountv8hi2 {}
+
+  const vuc __builtin_altivec_vpopcntub (vuc);
+    VPOPCNTUB popcountv16qi2 {}
+
+  const vull __builtin_altivec_vpopcntud (vull);
+    VPOPCNTUD popcountv2di2 {}
+
+  const vus __builtin_altivec_vpopcntuh (vus);
+    VPOPCNTUH popcountv8hi2 {}
+
+  const vui __builtin_altivec_vpopcntuw (vui);
+    VPOPCNTUW popcountv4si2 {}
+
+  const vui __builtin_altivec_vpopcntw (vsi);
+    VPOPCNTW popcountv4si2 {}
+
+  const vsll __builtin_altivec_vrld (vsll, vull);
+    VRLD vrotlv2di3 {}
+
+  const vsll __builtin_altivec_vsld (vsll, vull);
+    VSLD vashlv2di3 {}
+
+  const vsll __builtin_altivec_vsrad (vsll, vull);
+    VSRAD vashrv2di3 {}
+
+  const vsll __builtin_altivec_vsrd (vsll, vull);
+    VSRD vlshrv2di3 {}
+
+  const vuq __builtin_altivec_vsubcuq (vuq, vuq);
+    VSUBCUQ altivec_vsubcuq {}
+
+  const vsq __builtin_altivec_vsubecuq (vsq, vsq, vsq);
+    VSUBECUQ altivec_vsubecuq {}
+
+  const vuq __builtin_altivec_vsubeuqm (vuq, vuq, vuq);
+    VSUBEUQM altivec_vsubeuqm {}
+
+  const vull __builtin_altivec_vsubudm (vull, vull);
+    VSUBUDM subv2di3 {}
+
+  const vuq __builtin_altivec_vsubuqm (vuq, vuq);
+    VSUBUQM altivec_vsubuqm {}
+
+  const vsll __builtin_altivec_vupkhsw (vsi);
+    VUPKHSW altivec_vupkhsw {}
+
+  const vsll __builtin_altivec_vupklsw (vsi);
+    VUPKLSW altivec_vupklsw {}
+
+  const vsq __builtin_bcdadd (vsq, vsq, const int<1>);
+    BCDADD bcdadd {}
+
+  const unsigned int __builtin_bcdadd_eq (vsq, vsq, const int<1>);
+    BCDADD_EQ bcdadd_eq {}
+
+  const unsigned int __builtin_bcdadd_gt (vsq, vsq, const int<1>);
+    BCDADD_GT bcdadd_gt {}
+
+  const unsigned int __builtin_bcdadd_lt (vsq, vsq, const int<1>);
+    BCDADD_LT bcdadd_lt {}
+
+  const unsigned int __builtin_bcdadd_ov (vsq, vsq, const int<1>);
+    BCDADD_OV bcdadd_unordered {}
+
+  const vsq __builtin_bcdsub (vsq, vsq, const int<1>);
+    BCDSUB bcdsub {}
+
+  const unsigned int __builtin_bcdsub_eq (vsq, vsq, const int<1>);
+    BCDSUB_EQ bcdsub_eq {}
+
+  const unsigned int __builtin_bcdsub_gt (vsq, vsq, const int<1>);
+    BCDSUB_GT bcdsub_gt {}
+
+  const unsigned int __builtin_bcdsub_lt (vsq, vsq, const int<1>);
+    BCDSUB_LT bcdsub_lt {}
+
+  const unsigned int __builtin_bcdsub_ov (vsq, vsq, const int<1>);
+    BCDSUB_OV bcdsub_unordered {}
+
+  const vuc __builtin_crypto_vpermxor_v16qi (vuc, vuc, vuc);
+    VPERMXOR_V16QI crypto_vpermxor_v16qi {}
+
+  const vull __builtin_crypto_vpermxor_v2di (vull, vull, vull);
+    VPERMXOR_V2DI crypto_vpermxor_v2di {}
+
+  const vui __builtin_crypto_vpermxor_v4si (vui, vui, vui);
+    VPERMXOR_V4SI crypto_vpermxor_v4si {}
+
+  const vus __builtin_crypto_vpermxor_v8hi (vus, vus, vus);
+    VPERMXOR_V8HI crypto_vpermxor_v8hi {}
+
+  const vus __builtin_crypto_vpmsumb (vuc, vuc);
+    VPMSUMB crypto_vpmsumb {}
+
+  const vuq __builtin_crypto_vpmsumd (vull, vull);
+    VPMSUMD crypto_vpmsumd {}
+
+  const vui __builtin_crypto_vpmsumh (vus, vus);
+    VPMSUMH crypto_vpmsumh {}
+
+  const vull __builtin_crypto_vpmsumw (vui, vui);
+    VPMSUMW crypto_vpmsumw {}
+
+  const vf __builtin_vsx_float2_v2df (vd, vd);
+    FLOAT2_V2DF float2_v2df {}
+
+  const vf __builtin_vsx_float2_v2di (vsll, vsll);
+    FLOAT2_V2DI float2_v2di {}
+
+  const vsc __builtin_vsx_revb_v16qi (vsc);
+    REVB_V16QI revb_v16qi {}
+
+  const vsq __builtin_vsx_revb_v1ti (vsq);
+    REVB_V1TI revb_v1ti {}
+
+  const vd __builtin_vsx_revb_v2df (vd);
+    REVB_V2DF revb_v2df {}
+
+  const vsll __builtin_vsx_revb_v2di (vsll);
+    REVB_V2DI revb_v2di {}
+
+  const vf __builtin_vsx_revb_v4sf (vf);
+    REVB_V4SF revb_v4sf {}
+
+  const vsi __builtin_vsx_revb_v4si (vsi);
+    REVB_V4SI revb_v4si {}
+
+  const vss __builtin_vsx_revb_v8hi (vss);
+    REVB_V8HI revb_v8hi {}
+
+  const vf __builtin_vsx_uns_float2_v2di (vull, vull);
+    UNS_FLOAT2_V2DI uns_float2_v2di {}
+
+  const vsi __builtin_vsx_vsigned2_v2df (vd, vd);
+    VEC_VSIGNED2_V2DF vsigned2_v2df {}
+
+  const vui __builtin_vsx_vunsigned2_v2df (vd, vd);
+    VEC_VUNSIGNED2_V2DF vunsigned2_v2df {}
+
+  const vf __builtin_vsx_xscvdpspn (double);
+    XSCVDPSPN vsx_xscvdpspn {}
+
+  const double __builtin_vsx_xscvspdpn (vf);
+    XSCVSPDPN vsx_xscvspdpn {}
+
+
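
For reference only (not part of the patch): a minimal usage sketch showing how a couple of the builtins declared above can be exercised from C. It assumes a compiler with this support enabled, compiling with -mcpu=power8 (or -maltivec -mpower8-vector); the function and variable names are purely illustrative.

/* Illustrative sketch only -- not part of the patch.  Assumes
   -mcpu=power8 (or -maltivec -mpower8-vector) so the Power8 vector
   builtins are available.  */
#include <altivec.h>

vector signed long long
add_then_count_leading_zeros (vector signed long long a,
			      vector signed long long b)
{
  /* vaddudm: element-wise 64-bit addition (VADDUDM above).  */
  vector signed long long sum = __builtin_altivec_vaddudm (a, b);

  /* vclzd: count leading zeros in each doubleword (VCLZD above).  */
  return __builtin_altivec_vclzd (sum);
}

In user code these operations are normally reached through the vec_add and vec_cntlz overloads from altivec.h rather than the raw __builtin_altivec_* names; the raw forms are shown here only to match the prototypes declared in the patch.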