From patchwork Tue Dec 4 14:59:03 2018
X-Patchwork-Submitter: "Paul A. Clarke"
X-Patchwork-Id: 1007687
To: gcc-patches@gcc.gnu.org, Segher Boessenkool
From: Paul Clarke
Subject: [PATCH 1/3][rs6000] x86-compat vector intrinsics fixes for BE, 32bit
Date: Tue, 4 Dec 2018 08:59:03 -0600

Fix general endian and 32-bit mode issues found in the compatibility
implementations of the x86 vector intrinsics when running the associated
test suite tests.  (The tests had been inadvertently made to PASS without
actually running the test code.  A later patch fixes this issue.)

In a few cases, the opportunity was taken to update the vector API used in
the implementations to the preferred functions from the OpenPOWER 64-Bit
ELF V2 ABI Specification.

[gcc]

2018-12-03  Paul A. Clarke

	PR target/88316
	* config/rs6000/mmintrin.h (_mm_unpackhi_pi8): Fix for big-endian.
	(_mm_unpacklo_pi8): Likewise.
	(_mm_mulhi_pi16): Likewise.
	(_mm_packs_pi16): Fix for big-endian.  Use preferred API.
	(_mm_packs_pi32): Likewise.
	(_mm_packs_pu16): Likewise.
	* config/rs6000/xmmintrin.h (_mm_cvtss_si32): Fix for big-endian.
	(_mm_cvtss_si64): Likewise.
	(_mm_cvtpi32x2_ps): Likewise.
	(_mm_shuffle_ps): Likewise.
	(_mm_movemask_pi8): Likewise.
	(_mm_mulhi_pu16): Likewise.
	(_mm_sad_pu8): Likewise.
	(_mm_cvtpu16_ps): Fix for big-endian.  Use preferred API.
	(_mm_cvtpu8_ps): Likewise.
	* config/rs6000/emmintrin.h (_mm_movemask_pd): Fix for big-endian.
	(_mm_mul_epu32): Likewise.
	(_mm_bsrli_si128): Likewise.
	(_mm_movemask_epi8): Likewise.
	(_mm_shufflehi_epi16): Likewise.
	(_mm_shufflelo_epi16): Likewise.
	(_mm_shuffle_epi32): Likewise.
	* config/rs6000/pmmintrin.h (_mm_hadd_ps): Fix for big-endian.
	(_mm_hsub_ps): Likewise.
	* config/rs6000/mmintrin.h (_mm_cmpeq_pi8): Fix for 32-bit mode.
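The recurring pattern in these fixes is to make the element order of vector
initializers and permute controls conditional on the target byte order.  As
a rough standalone sketch of that pattern (distilled from the
_mm_packs_pi16 hunk below, not the verbatim header code; the __m64_t
typedef here is only a stand-in for the headers' __m64):

#include <altivec.h>

typedef unsigned long long __m64_t;  /* stand-in for the headers' __m64 */

static inline __m64_t
packs_pi16_sketch (__m64_t __m1, __m64_t __m2)
{
  __vector signed short vm1;
  __vector signed char vresult;

  /* x86 semantics want __m1 to supply the low half of the packed result.
     The two 64-bit halves of the 128-bit source sit in opposite order on
     big-endian, so the initializer operands are swapped there.  */
  vm1 = (__vector signed short) (__vector unsigned long long)
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    { __m1, __m2 };
#else
    { __m2, __m1 };
#endif
  /* vec_packs is the ELF V2 ABI's preferred spelling of vec_vpkshss.  */
  vresult = vec_packs (vm1, vm1);
  return ((__vector long long) vresult)[0];
}

The same kind of guard (or the equivalent __LITTLE_ENDIAN__ /
__BIG_ENDIAN__ conditionals) appears in the permute-control tables for the
shuffle, movemask, and mulhi fixes below.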
--- PC Index: gcc/config/rs6000/emmintrin.h =================================================================== diff --git a/trunk/gcc/config/rs6000/emmintrin.h b/trunk/gcc/config/rs6000/emmintrin.h --- a/trunk/gcc/config/rs6000/emmintrin.h (revision 266157) +++ b/trunk/gcc/config/rs6000/emmintrin.h (working copy) @@ -1237,7 +1237,7 @@ _mm_movemask_pd (__m128d __A) #ifdef __LITTLE_ENDIAN__ 0x80800040, 0x80808080, 0x80808080, 0x80808080 #elif __BIG_ENDIAN__ - 0x80808080, 0x80808080, 0x80808080, 0x80800040 + 0x80808080, 0x80808080, 0x80808080, 0x80804000 #endif }; @@ -1483,12 +1483,8 @@ _mm_mul_epu32 (__m128i __A, __m128i __B) #endif return (__m128i) result; #else -#ifdef __LITTLE_ENDIAN__ return (__m128i) vec_mule ((__v4su)__A, (__v4su)__B); -#elif __BIG_ENDIAN__ - return (__m128i) vec_mulo ((__v4su)__A, (__v4su)__B); #endif -#endif } extern __inline __m128i __attribute__((__gnu_inline__, __always_inline__, __artificial__)) @@ -1612,7 +1608,8 @@ _mm_bsrli_si128 (__m128i __A, const int __N) const __v16qu zeros = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; if (__N < 16) - if (__builtin_constant_p(__N)) + if (__builtin_constant_p(__N) && + __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__) /* Would like to use Vector Shift Left Double by Octet Immediate here to use the immediate form and avoid load of __N * 8 value into a separate VR. */ @@ -1620,7 +1617,11 @@ _mm_bsrli_si128 (__m128i __A, const int __N) else { __v16qu shift = vec_splats((unsigned char)(__N*8)); +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ result = vec_sro ((__v16qu)__A, shift); +#else + result = vec_slo ((__v16qu)__A, shift); +#endif } else result = zeros; @@ -2026,13 +2027,8 @@ _mm_movemask_epi8 (__m128i __A) __vector unsigned long long result; static const __vector unsigned char perm_mask = { -#ifdef __LITTLE_ENDIAN__ 0x78, 0x70, 0x68, 0x60, 0x58, 0x50, 0x48, 0x40, 0x38, 0x30, 0x28, 0x20, 0x18, 0x10, 0x08, 0x00 -#elif __BIG_ENDIAN__ - 0x00, 0x08, 0x10, 0x18, 0x20, 0x28, 0x30, 0x38, - 0x40, 0x48, 0x50, 0x58, 0x60, 0x68, 0x70, 0x78 -#endif }; result = ((__vector unsigned long long) @@ -2078,34 +2074,23 @@ _mm_shufflehi_epi16 (__m128i __A, const int __mask #ifdef __LITTLE_ENDIAN__ 0x0908, 0x0B0A, 0x0D0C, 0x0F0E #elif __BIG_ENDIAN__ - 0x0607, 0x0405, 0x0203, 0x0001 + 0x0809, 0x0A0B, 0x0C0D, 0x0E0F #endif }; __v2du pmask = #ifdef __LITTLE_ENDIAN__ - { 0x1716151413121110UL, 0x1f1e1d1c1b1a1918UL}; + { 0x1716151413121110UL, 0UL}; #elif __BIG_ENDIAN__ - { 0x1011121314151617UL, 0x18191a1b1c1d1e1fUL}; + { 0x1011121314151617UL, 0UL}; #endif __m64_union t; __v2du a, r; -#ifdef __LITTLE_ENDIAN__ t.as_short[0] = permute_selectors[element_selector_98]; t.as_short[1] = permute_selectors[element_selector_BA]; t.as_short[2] = permute_selectors[element_selector_DC]; t.as_short[3] = permute_selectors[element_selector_FE]; -#elif __BIG_ENDIAN__ - t.as_short[3] = permute_selectors[element_selector_98]; - t.as_short[2] = permute_selectors[element_selector_BA]; - t.as_short[1] = permute_selectors[element_selector_DC]; - t.as_short[0] = permute_selectors[element_selector_FE]; -#endif -#ifdef __LITTLE_ENDIAN__ pmask[1] = t.as_m64; -#elif __BIG_ENDIAN__ - pmask[0] = t.as_m64; -#endif a = (__v2du)__A; r = vec_perm (a, a, (__vector unsigned char)pmask); return (__m128i) r; @@ -2122,30 +2107,23 @@ _mm_shufflelo_epi16 (__m128i __A, const int __mask { #ifdef __LITTLE_ENDIAN__ 0x0100, 0x0302, 0x0504, 0x0706 -#elif __BIG_ENDIAN__ - 0x0e0f, 0x0c0d, 0x0a0b, 0x0809 +#else + 0x0001, 0x0203, 0x0405, 0x0607 #endif }; - __v2du pmask = { 0x1011121314151617UL, 
0x1f1e1d1c1b1a1918UL}; + __v2du pmask = +#ifdef __LITTLE_ENDIAN__ + { 0UL, 0x1f1e1d1c1b1a1918UL}; +#else + { 0UL, 0x18191a1b1c1d1e1fUL}; +#endif __m64_union t; __v2du a, r; - -#ifdef __LITTLE_ENDIAN__ t.as_short[0] = permute_selectors[element_selector_10]; t.as_short[1] = permute_selectors[element_selector_32]; t.as_short[2] = permute_selectors[element_selector_54]; t.as_short[3] = permute_selectors[element_selector_76]; -#elif __BIG_ENDIAN__ - t.as_short[3] = permute_selectors[element_selector_10]; - t.as_short[2] = permute_selectors[element_selector_32]; - t.as_short[1] = permute_selectors[element_selector_54]; - t.as_short[0] = permute_selectors[element_selector_76]; -#endif -#ifdef __LITTLE_ENDIAN__ pmask[0] = t.as_m64; -#elif __BIG_ENDIAN__ - pmask[1] = t.as_m64; -#endif a = (__v2du)__A; r = vec_perm (a, a, (__vector unsigned char)pmask); return (__m128i) r; @@ -2163,22 +2141,15 @@ _mm_shuffle_epi32 (__m128i __A, const int __mask) #ifdef __LITTLE_ENDIAN__ 0x03020100, 0x07060504, 0x0B0A0908, 0x0F0E0D0C #elif __BIG_ENDIAN__ - 0x0C0D0E0F, 0x08090A0B, 0x04050607, 0x00010203 + 0x00010203, 0x04050607, 0x08090A0B, 0x0C0D0E0F #endif }; __v4su t; -#ifdef __LITTLE_ENDIAN__ t[0] = permute_selectors[element_selector_10]; t[1] = permute_selectors[element_selector_32]; t[2] = permute_selectors[element_selector_54] + 0x10101010; t[3] = permute_selectors[element_selector_76] + 0x10101010; -#elif __BIG_ENDIAN__ - t[3] = permute_selectors[element_selector_10] + 0x10101010; - t[2] = permute_selectors[element_selector_32] + 0x10101010; - t[1] = permute_selectors[element_selector_54]; - t[0] = permute_selectors[element_selector_76]; -#endif return (__m128i)vec_perm ((__v4si) __A, (__v4si)__A, (__vector unsigned char)t); } Index: gcc/config/rs6000/mmintrin.h =================================================================== diff --git a/trunk/gcc/config/rs6000/mmintrin.h b/trunk/gcc/config/rs6000/mmintrin.h --- a/trunk/gcc/config/rs6000/mmintrin.h (revision 266157) +++ b/trunk/gcc/config/rs6000/mmintrin.h (working copy) @@ -172,8 +172,13 @@ _mm_packs_pi16 (__m64 __m1, __m64 __m2) __vector signed short vm1; __vector signed char vresult; - vm1 = (__vector signed short) (__vector unsigned long long) { __m2, __m1 }; - vresult = vec_vpkshss (vm1, vm1); + vm1 = (__vector signed short) (__vector unsigned long long) +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + { __m1, __m2 }; +#else + { __m2, __m1 }; +#endif + vresult = vec_packs (vm1, vm1); return (__m64) ((__vector long long) vresult)[0]; } @@ -192,8 +197,13 @@ _mm_packs_pi32 (__m64 __m1, __m64 __m2) __vector signed int vm1; __vector signed short vresult; - vm1 = (__vector signed int) (__vector unsigned long long) { __m2, __m1 }; - vresult = vec_vpkswss (vm1, vm1); + vm1 = (__vector signed int) (__vector unsigned long long) +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + { __m1, __m2 }; +#else + { __m2, __m1 }; +#endif + vresult = vec_packs (vm1, vm1); return (__m64) ((__vector long long) vresult)[0]; } @@ -209,12 +219,19 @@ _m_packssdw (__m64 __m1, __m64 __m2) extern __inline __m64 __attribute__((__gnu_inline__, __always_inline__, __artificial__)) _mm_packs_pu16 (__m64 __m1, __m64 __m2) { - __vector signed short vm1; - __vector unsigned char vresult; - - vm1 = (__vector signed short) (__vector unsigned long long) { __m2, __m1 }; - vresult = vec_vpkshus (vm1, vm1); - return (__m64) ((__vector long long) vresult)[0]; + __vector unsigned char r; + __vector signed short vm1 = (__vector signed short) (__vector long long) +#if __BYTE_ORDER__ == 
__ORDER_LITTLE_ENDIAN__ + { __m1, __m2 }; +#else + { __m2, __m1 }; +#endif + const __vector signed short __zero = { 0 }; + __vector __bool short __select = vec_cmplt (vm1, __zero); + r = vec_packs ((vector unsigned short) vm1, (vector unsigned short) vm1); + __vector __bool char packsel = vec_pack (__select, __select); + r = vec_sel (r, (const vector unsigned char) __zero, packsel); + return (__m64) ((__vector long long) r)[0]; } extern __inline __m64 __attribute__((__gnu_inline__, __always_inline__, __artificial__)) @@ -235,7 +252,7 @@ _mm_unpackhi_pi8 (__m64 __m1, __m64 __m2) a = (__vector unsigned char)vec_splats (__m1); b = (__vector unsigned char)vec_splats (__m2); c = vec_mergel (a, b); - return (__m64) ((__vector long long) c)[0]; + return (__m64) ((__vector long long) c)[1]; #else __m64_union m1, m2, res; @@ -316,7 +333,7 @@ _mm_unpacklo_pi8 (__m64 __m1, __m64 __m2) a = (__vector unsigned char)vec_splats (__m1); b = (__vector unsigned char)vec_splats (__m2); c = vec_mergel (a, b); - return (__m64) ((__vector long long) c)[1]; + return (__m64) ((__vector long long) c)[0]; #else __m64_union m1, m2, res; @@ -710,7 +727,7 @@ _mm_setzero_si64 (void) extern __inline __m64 __attribute__((__gnu_inline__, __always_inline__, __artificial__)) _mm_cmpeq_pi8 (__m64 __m1, __m64 __m2) { -#ifdef _ARCH_PWR6 +#if defined(_ARCH_PWR6) && defined(__powerpc64__) __m64 res; __asm__( "cmpb %0,%1,%2;\n" @@ -1084,8 +1101,13 @@ _mm_mulhi_pi16 (__m64 __m1, __m64 __m2) __vector signed short c; __vector signed int w0, w1; __vector unsigned char xform1 = { +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ 0x02, 0x03, 0x12, 0x13, 0x06, 0x07, 0x16, 0x17, 0x0A, 0x0B, 0x1A, 0x1B, 0x0E, 0x0F, 0x1E, 0x1F +#else + 0x00, 0x01, 0x10, 0x11, 0x04, 0x05, 0x14, 0x15, + 0x00, 0x01, 0x10, 0x11, 0x04, 0x05, 0x14, 0x15 +#endif }; a = (__vector signed short)vec_splats (__m1); Index: gcc/config/rs6000/pmmintrin.h =================================================================== diff --git a/trunk/gcc/config/rs6000/pmmintrin.h b/trunk/gcc/config/rs6000/pmmintrin.h --- a/trunk/gcc/config/rs6000/pmmintrin.h (revision 266157) +++ b/trunk/gcc/config/rs6000/pmmintrin.h (working copy) @@ -75,18 +75,16 @@ extern __inline __m128 __attribute__((__gnu_inline _mm_hadd_ps (__m128 __X, __m128 __Y) { __vector unsigned char xform2 = { - #ifdef __LITTLE_ENDIAN__ - 0x00, 0x01, 0x02, 0x03, 0x08, 0x09, 0x0A, 0x0B, 0x10, 0x11, 0x12, 0x13, 0x18, 0x19, 0x1A, 0x1B - #elif __BIG_ENDIAN__ - 0x14, 0x15, 0x16, 0x17, 0x1C, 0x1D, 0x1E, 0x1F, 0x04, 0x05, 0x06, 0x07, 0x0C, 0x0D, 0x0E, 0x0F - #endif + 0x00, 0x01, 0x02, 0x03, + 0x08, 0x09, 0x0A, 0x0B, + 0x10, 0x11, 0x12, 0x13, + 0x18, 0x19, 0x1A, 0x1B }; __vector unsigned char xform1 = { - #ifdef __LITTLE_ENDIAN__ - 0x04, 0x05, 0x06, 0x07, 0x0C, 0x0D, 0x0E, 0x0F, 0x14, 0x15, 0x16, 0x17, 0x1C, 0x1D, 0x1E, 0x1F - #elif __BIG_ENDIAN__ - 0x10, 0x11, 0x12, 0x13, 0x18, 0x19, 0x1A, 0x1B, 0x00, 0x01, 0x02, 0x03, 0x08, 0x09, 0x0A, 0x0B - #endif + 0x04, 0x05, 0x06, 0x07, + 0x0C, 0x0D, 0x0E, 0x0F, + 0x14, 0x15, 0x16, 0x17, + 0x1C, 0x1D, 0x1E, 0x1F }; return (__m128) vec_add (vec_perm ((__v4sf) __X, (__v4sf) __Y, xform2), vec_perm ((__v4sf) __X, (__v4sf) __Y, xform1)); @@ -96,18 +94,16 @@ extern __inline __m128 __attribute__((__gnu_inline _mm_hsub_ps (__m128 __X, __m128 __Y) { __vector unsigned char xform2 = { - #ifdef __LITTLE_ENDIAN__ - 0x00, 0x01, 0x02, 0x03, 0x08, 0x09, 0x0A, 0x0B, 0x10, 0x11, 0x12, 0x13, 0x18, 0x19, 0x1A, 0x1B - #elif __BIG_ENDIAN__ - 0x14, 0x15, 0x16, 0x17, 0x1C, 0x1D, 0x1E, 0x1F, 0x04, 0x05, 0x06, 0x07, 
0x0C, 0x0D, 0x0E, 0x0F - #endif + 0x00, 0x01, 0x02, 0x03, + 0x08, 0x09, 0x0A, 0x0B, + 0x10, 0x11, 0x12, 0x13, + 0x18, 0x19, 0x1A, 0x1B }; __vector unsigned char xform1 = { - #ifdef __LITTLE_ENDIAN__ - 0x04, 0x05, 0x06, 0x07, 0x0C, 0x0D, 0x0E, 0x0F, 0x14, 0x15, 0x16, 0x17, 0x1C, 0x1D, 0x1E, 0x1F - #elif __BIG_ENDIAN__ - 0x10, 0x11, 0x12, 0x13, 0x18, 0x19, 0x1A, 0x1B, 0x00, 0x01, 0x02, 0x03, 0x08, 0x09, 0x0A, 0x0B - #endif + 0x04, 0x05, 0x06, 0x07, + 0x0C, 0x0D, 0x0E, 0x0F, + 0x14, 0x15, 0x16, 0x17, + 0x1C, 0x1D, 0x1E, 0x1F }; return (__m128) vec_sub (vec_perm ((__v4sf) __X, (__v4sf) __Y, xform2), vec_perm ((__v4sf) __X, (__v4sf) __Y, xform1)); Index: gcc/config/rs6000/xmmintrin.h =================================================================== diff --git a/trunk/gcc/config/rs6000/xmmintrin.h b/trunk/gcc/config/rs6000/xmmintrin.h --- a/trunk/gcc/config/rs6000/xmmintrin.h (revision 266157) +++ b/trunk/gcc/config/rs6000/xmmintrin.h (working copy) @@ -907,17 +907,17 @@ _mm_cvtss_si32 (__m128 __A) { __m64 res = 0; #ifdef _ARCH_PWR8 - __m128 vtmp; double dtmp; __asm__( - "xxsldwi %x1,%x3,%x3,3;\n" - "xscvspdp %x2,%x1;\n" +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + "xxsldwi %x0,%x0,%x0,3;\n" +#endif + "xscvspdp %x2,%x0;\n" "fctiw %2,%2;\n" - "mfvsrd %0,%x2;\n" - : "=r" (res), - "=&wa" (vtmp), + "mfvsrd %1,%x2;\n" + : "+wa" (__A), + "=r" (res), "=f" (dtmp) - : "wa" (__A) : ); #else res = __builtin_rint(__A[0]); @@ -940,17 +940,17 @@ _mm_cvtss_si64 (__m128 __A) { __m64 res = 0; #ifdef _ARCH_PWR8 - __m128 vtmp; double dtmp; __asm__( - "xxsldwi %x1,%x3,%x3,3;\n" - "xscvspdp %x2,%x1;\n" +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + "xxsldwi %x0,%x0,%x0,3;\n" +#endif + "xscvspdp %x2,%x0;\n" "fctid %2,%2;\n" - "mfvsrd %0,%x2;\n" - : "=r" (res), - "=&wa" (vtmp), + "mfvsrd %1,%x2;\n" + : "+wa" (__A), + "=r" (res), "=f" (dtmp) - : "wa" (__A) : ); #else res = __builtin_llrint(__A[0]); @@ -1148,7 +1148,12 @@ _mm_cvtpu16_ps (__m64 __A) __vector float vf1; vs8 = (__vector unsigned short) (__vector unsigned long long) { __A, __A }; - vi4 = (__vector unsigned int) vec_vmrglh (vs8, zero); + vi4 = (__vector unsigned int) vec_mergel +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + (vs8, zero); +#else + (zero, vs8); +#endif vf1 = (__vector float) vec_ctf (vi4, 0); return (__m128) vf1; @@ -1184,9 +1189,15 @@ _mm_cvtpu8_ps (__m64 __A) __vector float vf1; vc16 = (__vector unsigned char) (__vector unsigned long long) { __A, __A }; - vs8 = (__vector unsigned short) vec_vmrglb (vc16, zero); - vi4 = (__vector unsigned int) vec_vmrghh (vs8, +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + vs8 = (__vector unsigned short) vec_mergel (vc16, zero); + vi4 = (__vector unsigned int) vec_mergeh (vs8, (__vector unsigned short) zero); +#else + vs8 = (__vector unsigned short) vec_mergel (zero, vc16); + vi4 = (__vector unsigned int) vec_mergeh ((__vector unsigned short) zero, + vs8); +#endif vf1 = (__vector float) vec_ctf (vi4, 0); return (__m128) vf1; @@ -1199,7 +1210,7 @@ _mm_cvtpi32x2_ps (__m64 __A, __m64 __B) __vector signed int vi4; __vector float vf4; - vi4 = (__vector signed int) (__vector unsigned long long) { __B, __A }; + vi4 = (__vector signed int) (__vector unsigned long long) { __A, __B }; vf4 = (__vector float) vec_ctf (vi4, 0); return (__m128) vf4; } @@ -1250,22 +1261,15 @@ _mm_shuffle_ps (__m128 __A, __m128 __B, int cons #ifdef __LITTLE_ENDIAN__ 0x03020100, 0x07060504, 0x0B0A0908, 0x0F0E0D0C #elif __BIG_ENDIAN__ - 0x0C0D0E0F, 0x08090A0B, 0x04050607, 0x00010203 + 0x00010203, 0x04050607, 0x08090A0B, 0x0C0D0E0F 
#endif }; __vector unsigned int t; -#ifdef __LITTLE_ENDIAN__ t[0] = permute_selectors[element_selector_10]; t[1] = permute_selectors[element_selector_32]; t[2] = permute_selectors[element_selector_54] + 0x10101010; t[3] = permute_selectors[element_selector_76] + 0x10101010; -#elif __BIG_ENDIAN__ - t[3] = permute_selectors[element_selector_10] + 0x10101010; - t[2] = permute_selectors[element_selector_32] + 0x10101010; - t[1] = permute_selectors[element_selector_54]; - t[0] = permute_selectors[element_selector_76]; -#endif return vec_perm ((__v4sf) __A, (__v4sf)__B, (__vector unsigned char)t); } @@ -1573,8 +1577,12 @@ _m_pminub (__m64 __A, __m64 __B) extern __inline int __attribute__((__gnu_inline__, __always_inline__, __artificial__)) _mm_movemask_pi8 (__m64 __A) { - unsigned long long p = 0x0008101820283038UL; // permute control for sign bits - + unsigned long long p = +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + 0x0008101820283038UL; // permute control for sign bits +#else + 0x3830282018100800UL; // permute control for sign bits +#endif return __builtin_bpermd (p, __A); } @@ -1593,8 +1601,13 @@ _mm_mulhi_pu16 (__m64 __A, __m64 __B) __vector unsigned short c; __vector unsigned int w0, w1; __vector unsigned char xform1 = { +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ 0x02, 0x03, 0x12, 0x13, 0x06, 0x07, 0x16, 0x17, 0x0A, 0x0B, 0x1A, 0x1B, 0x0E, 0x0F, 0x1E, 0x1F +#else + 0x00, 0x01, 0x10, 0x11, 0x04, 0x05, 0x14, 0x15, + 0x00, 0x01, 0x10, 0x11, 0x04, 0x05, 0x14, 0x15 +#endif }; a = (__vector unsigned short)vec_splats (__A); @@ -1725,7 +1738,7 @@ _mm_sad_pu8 (__m64 __A, __m64 __B) __vector signed int vsum; const __vector unsigned int zero = { 0, 0, 0, 0 }; - unsigned short result; + __m64_union result = {0}; a = (__vector unsigned char) (__vector unsigned long long) { 0UL, __A }; b = (__vector unsigned char) (__vector unsigned long long) { 0UL, __B }; @@ -1738,8 +1751,8 @@ _mm_sad_pu8 (__m64 __A, __m64 __B) vsum = vec_sums (vsum, (__vector signed int) zero); /* The sum is in the right most 32-bits of the vector result. Transfer to a GPR and truncate to 16 bits. */ - result = vsum[3]; - return (result); + result.as_short[0] = vsum[3]; + return result.as_m64; } extern __inline __m64 __attribute__((__gnu_inline__, __always_inline__, __artificial__)) From patchwork Tue Dec 4 14:59:40 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul A. 
Clarke" X-Patchwork-Id: 1007688 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (mailfrom) smtp.mailfrom=gcc.gnu.org (client-ip=209.132.180.131; helo=sourceware.org; envelope-from=gcc-patches-return-491626-incoming=patchwork.ozlabs.org@gcc.gnu.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=us.ibm.com Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=gcc.gnu.org header.i=@gcc.gnu.org header.b="vmN93Rk2"; dkim-atps=neutral Received: from sourceware.org (server1.sourceware.org [209.132.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 438Q5H25lhz9s7W for ; Wed, 5 Dec 2018 02:00:03 +1100 (AEDT) DomainKey-Signature: a=rsa-sha1; c=nofws; d=gcc.gnu.org; h=list-id :list-unsubscribe:list-archive:list-post:list-help:sender:to :from:subject:date:message-id:content-type :content-transfer-encoding:mime-version; q=dns; s=default; b=Ol1 28niAO/VnUm+IoJWHtzAf4+gFWuReZvc5jyxRrlb1bLcCPMcNVWu6EFNjiCy8Hse +BieKd5Eapx58ZDYjFR7Ivrct+yjMcrquOfM9NdoxTxEBDgobKZhbmqkxx9BBCUR yP3w2mIXvj4DIgG5sXye+WLyxNNqxdtQUMgJDcQc= DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=gcc.gnu.org; h=list-id :list-unsubscribe:list-archive:list-post:list-help:sender:to :from:subject:date:message-id:content-type :content-transfer-encoding:mime-version; s=default; bh=IMJcRmxMK z9BBkt3W5m93w49LvA=; b=vmN93Rk2Psuw+iT8oFWpI+Z2eiEsrqe0KVBuDJYWv C3ugyfBppe3fVqiP3RfdBfjeO3bDOkDKT62eV9SzDoZz8QJkV7iEPxIEKelKh6LT AsURXpg4IhrAZJDAxrpL1YPzlbBem6XnIzv6jCV+BYSXrSUh/bZtF1UOha06YeWX TI= Received: (qmail 4447 invoked by alias); 4 Dec 2018 14:59:55 -0000 Mailing-List: contact gcc-patches-help@gcc.gnu.org; run by ezmlm Precedence: bulk List-Id: List-Unsubscribe: List-Archive: List-Post: List-Help: Sender: gcc-patches-owner@gcc.gnu.org Delivered-To: mailing list gcc-patches@gcc.gnu.org Received: (qmail 4420 invoked by uid 89); 4 Dec 2018 14:59:54 -0000 Authentication-Results: sourceware.org; auth=none X-Spam-SWARE-Status: No, score=-21.8 required=5.0 tests=BAYES_00, GIT_PATCH_0, GIT_PATCH_2, GIT_PATCH_3, KAM_ASCII_DIVIDERS, KAM_SHORT, RCVD_IN_DNSWL_LOW, SPF_PASS autolearn=ham version=3.3.2 spammy=ux, ea, therein, 3.1 X-HELO: mx0a-001b2d01.pphosted.com Received: from mx0b-001b2d01.pphosted.com (HELO mx0a-001b2d01.pphosted.com) (148.163.158.5) by sourceware.org (qpsmtpd/0.93/v0.84-503-g423c35a) with ESMTP; Tue, 04 Dec 2018 14:59:49 +0000 Received: from pps.filterd (m0098419.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.0.22/8.16.0.22) with SMTP id wB4Ex5XL022850 for ; Tue, 4 Dec 2018 09:59:46 -0500 Received: from e33.co.us.ibm.com (e33.co.us.ibm.com [32.97.110.151]) by mx0b-001b2d01.pphosted.com with ESMTP id 2p5utq81yf-1 (version=TLSv1.2 cipher=AES256-GCM-SHA384 bits=256 verify=NOT) for ; Tue, 04 Dec 2018 09:59:45 -0500 Received: from localhost by e33.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Tue, 4 Dec 2018 14:59:45 -0000 Received: from b03cxnp08028.gho.boulder.ibm.com (9.17.130.20) by e33.co.us.ibm.com (192.168.1.133) with IBM ESMTP SMTP Gateway: Authorized Use Only! 
To: gcc-patches@gcc.gnu.org, Segher Boessenkool
From: Paul Clarke
Subject: [PATCH 2/3][rs6000] Fix x86-compat vector intrinsics testcases for BE, 32bit
Date: Tue, 4 Dec 2018 08:59:40 -0600

Fix general endian issues found in the test cases for the compatibility
implementations of the x86 vector intrinsics.  (The tests had been
inadvertently made to PASS without actually running the test code.  A later
patch fixes this issue.)

Additionally, a new <smmintrin.h> is added, as some of the APIs therein are
now used by the test cases.  It is _not_ a complete implementation of the
SSE4 interfaces, only the few "extract" interfaces used by the tests.

2018-12-03  Paul A. Clarke

[gcc]

	PR target/88316
	* config/rs6000/smmintrin.h: New file.
	* config.gcc: Add smmintrin.h to extra_headers for powerpc*-*-*.

[gcc/testsuite]

	PR target/88316
	* gcc.target/powerpc/mmx-packssdw-1.c: Fixes for big-endian.
	* gcc.target/powerpc/mmx-packsswb-1.c: Likewise.
	* gcc.target/powerpc/mmx-packuswb-1.c: Likewise.
	* gcc.target/powerpc/mmx-pmulhw-1.c: Likewise.
	* gcc.target/powerpc/sse-cvtpi32x2ps-1.c: Likewise.
	* gcc.target/powerpc/sse-cvtpu16ps-1.c: Likewise.
	* gcc.target/powerpc/sse-cvtss2si-1.c: Likewise.
	* gcc.target/powerpc/sse-cvtss2si-2.c: Likewise.
	* gcc.target/powerpc/sse2-pshufhw-1.c: Likewise.
	* gcc.target/powerpc/sse2-pshuflw-1.c: Likewise.

--- PC Index: gcc/config/rs6000/smmintrin.h =================================================================== diff --git a/trunk/gcc/config/rs6000/smmintrin.h b/trunk/gcc/config/rs6000/smmintrin.h new file mode 100644 --- /dev/null (revision 0) +++ b/trunk/gcc/config/rs6000/smmintrin.h (working copy) @@ -0,0 +1,67 @@ +/* Copyright (C) 2018 Free Software Foundation, Inc. + + This file is part of GCC. + + GCC is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 3, or (at your option) + any later version. + + GCC is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the + GNU General Public License for more details. + + Under Section 7 of GPL version 3, you are granted additional + permissions described in the GCC Runtime Library Exception, version + 3.1, as published by the Free Software Foundation. + + You should have received a copy of the GNU General Public License and + a copy of the GCC Runtime Library Exception along with this program; + see the files COPYING3 and COPYING.RUNTIME respectively. If not, see + . */ + +/* Implemented from the specification included in the Intel C++ Compiler + User Guide and Reference, version 9.0. */ + +#ifndef NO_WARN_X86_INTRINSICS +/* This header is distributed to simplify porting x86_64 code that + makes explicit use of Intel intrinsics to powerpc64le. + It is the user's responsibility to determine if the results are + acceptable and make additional changes as necessary. + Note that much code that uses Intel intrinsics can be rewritten in + standard C or GNU C extensions, which are more portable and better + optimized across multiple targets. */ +#endif + +#ifndef SMMINTRIN_H_ +#define SMMINTRIN_H_ + +#include +#include + +extern __inline int __attribute__((__gnu_inline__, __always_inline__, __artificial__)) +_mm_extract_epi8 (__m128i __X, const int __N) +{ + return (unsigned char) ((__v16qi)__X)[__N & 15]; +} + +extern __inline int __attribute__((__gnu_inline__, __always_inline__, __artificial__)) +_mm_extract_epi32 (__m128i __X, const int __N) +{ + return ((__v4si)__X)[__N & 3]; +} + +extern __inline int __attribute__((__gnu_inline__, __always_inline__, __artificial__)) +_mm_extract_epi64 (__m128i __X, const int __N) +{ + return ((__v2di)__X)[__N & 1]; +} + +extern __inline int __attribute__((__gnu_inline__, __always_inline__, __artificial__)) +_mm_extract_ps (__m128 __X, const int __N) +{ + return ((__v4si)__X)[__N & 3]; +} + +#endif Index: gcc/config.gcc =================================================================== diff --git a/trunk/gcc/config.gcc b/trunk/gcc/config.gcc --- a/trunk/gcc/config.gcc (revision 266157) +++ b/trunk/gcc/config.gcc (working copy) @@ -504,7 +504,7 @@ powerpc*-*-*) extra_headers="${extra_headers} bmi2intrin.h bmiintrin.h" extra_headers="${extra_headers} xmmintrin.h mm_malloc.h emmintrin.h" extra_headers="${extra_headers} mmintrin.h x86intrin.h" - extra_headers="${extra_headers} pmmintrin.h tmmintrin.h" + extra_headers="${extra_headers} pmmintrin.h tmmintrin.h smmintrin.h" extra_headers="${extra_headers} ppu_intrinsics.h spu2vmx.h vec_types.h si2vmx.h" extra_headers="${extra_headers} amo.h" case x$with_cpu in Index: gcc/testsuite/gcc.target/powerpc/mmx-packssdw-1.c =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packssdw-1.c b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packssdw-1.c --- a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packssdw-1.c (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packssdw-1.c (working copy) @@ -22,37 +22,50 @@ test (__m64 s1, __m64 s2) return _mm_packs_pi32 (s1, s2); } +static short +saturate (int val) +{ + if (val > 32767) + return 32767; + else if (val < -32768) + return -32768; + else + return val; +} + +static inline int +l_mm_extract_pi32 (__m64 b, int imm8) +{ + unsigned int shift = imm8 & 0x1; +#ifdef __BIG_ENDIAN__ + shift = 1 - shift; +#endif + return ((long long)b >> (shift * 32)) & 0xffffffff; +} + static void TEST (void) { __m64_union s1, s2; __m64_union u; __m64_union e; - int i; + int start, end, inc; s1.as_m64 = _mm_set_pi32 (2134, -128); 
s2.as_m64 = _mm_set_pi32 (41124, 234); u.as_m64 = test (s1.as_m64, s2.as_m64); - for (i = 0; i < 2; i++) - { - if (s1.as_int[i] > 32767) - e.as_short[i] = 32767; - else if (s1.as_int[i] < -32768) - e.as_short[i] = -32768; - else - e.as_short[i] = s1.as_int[i]; - } - - for (i = 0; i < 2; i++) - { - if (s2.as_int[i] > 32767) - e.as_short[i+2] = 32767; - else if (s2.as_int[i] < -32768) - e.as_short[i+2] = -32768; - else - e.as_short[i+2] = s2.as_int[i]; - } +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + e.as_m64 = _mm_set_pi16 (saturate (l_mm_extract_pi32 (s2.as_m64, 1)), + saturate (l_mm_extract_pi32 (s2.as_m64, 0)), + saturate (l_mm_extract_pi32 (s1.as_m64, 1)), + saturate (l_mm_extract_pi32 (s1.as_m64, 0))); +#else + e.as_m64 = _mm_set_pi16 (saturate (l_mm_extract_pi32 (s1.as_m64, 1)), + saturate (l_mm_extract_pi32 (s1.as_m64, 0)), + saturate (l_mm_extract_pi32 (s2.as_m64, 1)), + saturate (l_mm_extract_pi32 (s2.as_m64, 0))); +#endif if (u.as_m64 != e.as_m64) abort (); Index: gcc/testsuite/gcc.target/powerpc/mmx-packsswb-1.c =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packsswb-1.c b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packsswb-1.c --- a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packsswb-1.c (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packsswb-1.c (working copy) @@ -14,6 +14,7 @@ #include CHECK_H #include +#include static __m64 __attribute__((noinline, unused)) @@ -22,6 +23,17 @@ test (__m64 s1, __m64 s2) return _mm_packs_pi16 (s1, s2); } +static signed char +saturate (signed short val) +{ + if (val > 127) + return 127; + else if (val < -128) + return -128; + else + return val; +} + static void TEST (void) { @@ -34,25 +46,25 @@ TEST (void) s2.as_m64 = _mm_set_pi16 (41124, 234, 2344, 2354); u.as_m64 = test (s1.as_m64, s2.as_m64); - for (i = 0; i < 4; i++) - { - if (s1.as_short[i] > 127) - e.as_char[i] = 127; - else if (s1.as_short[i] < -128) - e.as_char[i] = -128; - else - e.as_char[i] = s1.as_short[i]; - } - - for (i = 0; i < 4; i++) - { - if (s2.as_short[i] > 127) - e.as_char[i+4] = 127; - else if (s2.as_short[i] < -128) - e.as_char[i+4] = -128; - else - e.as_char[i+4] = s2.as_short[i]; - } +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + e.as_m64 = _mm_set_pi8 (saturate (_mm_extract_pi16 (s2.as_m64, 3)), + saturate (_mm_extract_pi16 (s2.as_m64, 2)), + saturate (_mm_extract_pi16 (s2.as_m64, 1)), + saturate (_mm_extract_pi16 (s2.as_m64, 0)), + saturate (_mm_extract_pi16 (s1.as_m64, 3)), + saturate (_mm_extract_pi16 (s1.as_m64, 2)), + saturate (_mm_extract_pi16 (s1.as_m64, 1)), + saturate (_mm_extract_pi16 (s1.as_m64, 0))); +#else + e.as_m64 = _mm_set_pi8 (saturate (_mm_extract_pi16 (s1.as_m64, 3)), + saturate (_mm_extract_pi16 (s1.as_m64, 2)), + saturate (_mm_extract_pi16 (s1.as_m64, 1)), + saturate (_mm_extract_pi16 (s1.as_m64, 0)), + saturate (_mm_extract_pi16 (s2.as_m64, 3)), + saturate (_mm_extract_pi16 (s2.as_m64, 2)), + saturate (_mm_extract_pi16 (s2.as_m64, 1)), + saturate (_mm_extract_pi16 (s2.as_m64, 0))); +#endif if (u.as_m64 != e.as_m64) abort (); Index: gcc/testsuite/gcc.target/powerpc/mmx-packuswb-1.c =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packuswb-1.c b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packuswb-1.c --- a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packuswb-1.c (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-packuswb-1.c (working copy) @@ -15,6 +15,7 @@ 
#include CHECK_H #include +#include static __m64 __attribute__((noinline, unused)) @@ -23,6 +24,17 @@ test (__m64 s1, __m64 s2) return _mm_packs_pu16 (s1, s2); } +static unsigned char +saturate (signed short val) +{ + if (val > 255) + return 255; + else if (val < 0) + return 0; + else + return val; +} + static void TEST (void) { @@ -35,17 +47,26 @@ TEST (void) s2.as_m64 = _mm_set_pi16 (-9, -10, -11, -12); u.as_m64 = test (s1.as_m64, s2.as_m64); - for (i=0; i<4; i++) - { - tmp = s1.as_short[i]<0 ? 0 : s1.as_short[i]; - tmp = tmp>255 ? 255 : tmp; - e.as_char[i] = tmp; +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + e.as_m64 = _mm_set_pi8 (saturate (_mm_extract_pi16 (s2.as_m64, 3)), + saturate (_mm_extract_pi16 (s2.as_m64, 2)), + saturate (_mm_extract_pi16 (s2.as_m64, 1)), + saturate (_mm_extract_pi16 (s2.as_m64, 0)), + saturate (_mm_extract_pi16 (s1.as_m64, 3)), + saturate (_mm_extract_pi16 (s1.as_m64, 2)), + saturate (_mm_extract_pi16 (s1.as_m64, 1)), + saturate (_mm_extract_pi16 (s1.as_m64, 0))); +#else + e.as_m64 = _mm_set_pi8 (saturate (_mm_extract_pi16 (s1.as_m64, 3)), + saturate (_mm_extract_pi16 (s1.as_m64, 2)), + saturate (_mm_extract_pi16 (s1.as_m64, 1)), + saturate (_mm_extract_pi16 (s1.as_m64, 0)), + saturate (_mm_extract_pi16 (s2.as_m64, 3)), + saturate (_mm_extract_pi16 (s2.as_m64, 2)), + saturate (_mm_extract_pi16 (s2.as_m64, 1)), + saturate (_mm_extract_pi16 (s2.as_m64, 0))); +#endif - tmp = s2.as_short[i]<0 ? 0 : s2.as_short[i]; - tmp = tmp>255 ? 255 : tmp; - e.as_char[i+4] = tmp; - } - if (u.as_m64 != e.as_m64) abort (); } Index: gcc/testsuite/gcc.target/powerpc/mmx-pmulhw-1.c =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-pmulhw-1.c b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-pmulhw-1.c --- a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-pmulhw-1.c (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-pmulhw-1.c (working copy) @@ -33,13 +33,12 @@ TEST (void) s2.as_m64 = _mm_set_pi16 (11, 9834, 7444, -10222); u.as_m64 = test (s1.as_m64, s2.as_m64); - for (i = 0; i < 4; i++) - { - tmp = s1.as_short[i] * s2.as_short[i]; + e.as_m64 = _mm_set_pi16 ( + ((s1.as_short[3] * s2.as_short[3]) & 0xffff0000) >> 16, + ((s1.as_short[2] * s2.as_short[2]) & 0xffff0000) >> 16, + ((s1.as_short[1] * s2.as_short[1]) & 0xffff0000) >> 16, + ((s1.as_short[0] * s2.as_short[0]) & 0xffff0000) >> 16); - e.as_short[i] = (tmp & 0xffff0000)>>16; - } - if (u.as_m64 != e.as_m64) abort (); } Index: gcc/testsuite/gcc.target/powerpc/sse-cvtpi32x2ps-1.c =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtpi32x2ps-1.c b/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtpi32x2ps-1.c --- a/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtpi32x2ps-1.c (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtpi32x2ps-1.c (working copy) @@ -27,8 +27,8 @@ static void TEST (void) { __m64_union s1, s2; - union128 u; - float e[4] = {1000.0, -20000.0, 43.0, 546.0}; + union128 u, e; + e.x = _mm_set_ps (546.0, 43.0, -20000.0, 1000.0); /* input signed in {1000, -20000, 43, 546}. 
*/ s1.as_m64 = _mm_setr_pi32 (1000, -20000); @@ -37,6 +37,6 @@ TEST (void) u.x = test (s1.as_m64, s2.as_m64); - if (check_union128 (u, e)) + if (check_union128 (u, e.a)) abort (); } Index: gcc/testsuite/gcc.target/powerpc/sse-cvtpu16ps-1.c =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtpu16ps-1.c b/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtpu16ps-1.c --- a/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtpu16ps-1.c (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtpu16ps-1.c (working copy) @@ -27,14 +27,14 @@ static void TEST (void) { __m64_union s1; - union128 u; - float e[4] = {1000.0, 45536.0, 45.0, 65535.0}; + union128 u, e; + e.x = _mm_set_ps (65535.0, 45.0, 45536.0, 1000.0); /* input unsigned short {1000, 45536, 45, 65535}. */ s1.as_m64 = _mm_setr_pi16 (1000, -20000, 45, -1); u.x = test (s1.as_m64); - if (check_union128 (u, e)) + if (check_union128 (u, e.a)) abort (); } Index: gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-1.c =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-1.c b/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-1.c --- a/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-1.c (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-1.c (working copy) @@ -15,6 +15,7 @@ #endif #include +#include static int __attribute__((noinline, unused)) @@ -29,12 +30,17 @@ TEST (void) { union128 s1; int d; - int e; + union { + float f; + int i; + } e; s1.x = _mm_set_ps (24.43, 68.346, 43.35, 546.46); d = test (s1.x); - e = (int)s1.a[0]; - if (e != d) + e.i = _mm_extract_ps (s1.x, 0); + e.i = e.f; + + if (e.i != d) abort (); } Index: gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-2.c =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-2.c b/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-2.c --- a/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-2.c (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/sse-cvtss2si-2.c (working copy) @@ -15,6 +15,7 @@ #endif #include +#include static long long __attribute__((noinline, unused)) @@ -29,11 +30,17 @@ TEST (void) union128 s1; long long d; long long e; + union { + float f; + int i; + } u; s1.x = _mm_set_ps (344.4, 68.346, 43.35, 429496729501.4); d = test (s1.x); - e = (long long)s1.a[0]; + u.i = _mm_extract_ps (s1.x, 0); + e = u.f; + if (e != d) abort (); } Index: gcc/testsuite/gcc.target/powerpc/sse2-pshufhw-1.c =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/sse2-pshufhw-1.c b/trunk/gcc/testsuite/gcc.target/powerpc/sse2-pshufhw-1.c --- a/trunk/gcc/testsuite/gcc.target/powerpc/sse2-pshufhw-1.c (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/sse2-pshufhw-1.c (working copy) @@ -26,24 +26,28 @@ test (__m128i s1) static void TEST (void) { - union128i_q s1; - union128i_w u; + union128i_w s1, u; short e[8] = { 0 }; int i; int m1[4] = { 0x3, 0x3<<2, 0x3<<4, 0x3<<6 }; int m2[4]; - s1.x = _mm_set_epi64x (0xabcde,0xef58a234); + s1.x = _mm_set_epi16 (0, 0, 0xa, 0xbcde, 0, 0, 0xef58, 0xa234); u.x = test (s1.x); for (i = 0; i < 4; i++) - e[i] = (s1.a[0]>>(16 * i)) & 0xffff; + e[i] = s1.a[i]; - for (i = 0; i < 4; i++) - m2[i] = (N & m1[i])>>(2*i); + for (i = 0; i < 4; i++) { + int i2 = i; +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ + i2 = 3 - i; +#endif + m2[i2] = (N & 
m1[i2]) >> (2 * i2); } for (i = 0; i < 4; i++) - e[i] = (s1.a[0] >> (16 * m2[i])) & 0xffff; + e[i] = s1.a[m2[i]]; if (check_union128i_w(u, e)) {

From patchwork Tue Dec 4 15:00:18 2018
X-Patchwork-Submitter: "Paul A. Clarke"
X-Patchwork-Id: 1007689
To: gcc-patches@gcc.gnu.org, Segher Boessenkool
From: Paul Clarke
Subject: [PATCH 3/3][rs6000] Enable x86-compat vector intrinsics testing
Date: Tue, 4 Dec 2018 09:00:18 -0600
Message-Id: <4d64efdb-637d-7c4c-ac7f-bed526e3a689@us.ibm.com>

The testsuite tests for the compatibility implementations of x86 vector
intrinsics for "powerpc" had been inadvertently made to PASS without
actually running the test code.  This patch removes the code which kept the
tests from running the actual test code.  (A short standalone example of
what now actually gets exercised appears after the final diff below.)

2018-12-03  Paul A. Clarke

[gcc/testsuite]

	PR target/88316
	* gcc.target/powerpc/bmi-check.h: Remove test for
	__BUILTIN_CPU_SUPPORTS__, thereby enabling test code to run.
	* gcc.target/powerpc/bmi2-check.h: Likewise.
	* gcc.target/powerpc/mmx-check.h: Likewise.
	* gcc.target/powerpc/sse-check.h: Likewise.
	* gcc.target/powerpc/sse2-check.h: Likewise.
	* gcc.target/powerpc/sse3-check.h: Likewise.
* gcc.target/powerpc/ssse3-check.h: Likewise. --- PC Index: gcc/testsuite/gcc.target/powerpc/bmi-check.h =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/bmi-check.h b/trunk/gcc/testsuite/gcc.target/powerpc/bmi-check.h --- a/trunk/gcc/testsuite/gcc.target/powerpc/bmi-check.h (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/bmi-check.h (working copy) @@ -13,19 +13,9 @@ do_test (void) int main () { -#ifdef __BUILTIN_CPU_SUPPORTS__ - /* Need 64-bit for 64-bit longs as single instruction. */ - if ( __builtin_cpu_supports ("ppc64") ) - { - do_test (); + do_test (); #ifdef DEBUG - printf ("PASSED\n"); + printf ("PASSED\n"); #endif - } -#ifdef DEBUG - else - printf ("SKIPPED\n"); -#endif -#endif /* __BUILTIN_CPU_SUPPORTS__ */ return 0; } Index: gcc/testsuite/gcc.target/powerpc/bmi2-check.h =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/bmi2-check.h b/trunk/gcc/testsuite/gcc.target/powerpc/bmi2-check.h --- a/trunk/gcc/testsuite/gcc.target/powerpc/bmi2-check.h (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/bmi2-check.h (working copy) @@ -13,22 +13,10 @@ do_test (void) int main () { -#ifdef __BUILTIN_CPU_SUPPORTS__ - /* The BMI2 test for pext test requires the Bit Permute doubleword - (bpermd) instruction added in PowerISA 2.06 along with the VSX - facility. So we can test for arch_2_06. */ - if ( __builtin_cpu_supports ("arch_2_06") ) - { - do_test (); + do_test (); #ifdef DEBUG - printf ("PASSED\n"); + printf ("PASSED\n"); #endif - } -#ifdef DEBUG - else - printf ("SKIPPED\n"); -#endif -#endif /* __BUILTIN_CPU_SUPPORTS__ */ return 0; } Index: gcc/testsuite/gcc.target/powerpc/mmx-check.h =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-check.h b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-check.h --- a/trunk/gcc/testsuite/gcc.target/powerpc/mmx-check.h (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/mmx-check.h (working copy) @@ -13,23 +13,9 @@ do_test (void) int main () { -#ifdef __BUILTIN_CPU_SUPPORTS__ - /* Many MMX intrinsics are simpler / faster to implement by - transferring the __m64 (long int) to vector registers for SIMD - operations. To be efficient we also need the direct register - transfer instructions from POWER8. So we can test for - arch_2_07. */ - if ( __builtin_cpu_supports ("arch_2_07") ) - { - do_test (); + do_test (); #ifdef DEBUG - printf ("PASSED\n"); + printf ("PASSED\n"); #endif - } -#ifdef DEBUG - else - printf ("SKIPPED\n"); -#endif -#endif /* __BUILTIN_CPU_SUPPORTS__ */ return 0; } Index: gcc/testsuite/gcc.target/powerpc/sse-check.h =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/sse-check.h b/trunk/gcc/testsuite/gcc.target/powerpc/sse-check.h --- a/trunk/gcc/testsuite/gcc.target/powerpc/sse-check.h (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/sse-check.h (working copy) @@ -1,7 +1,7 @@ #include #include "m128-check.h" -#define DEBUG 1 +// #define DEBUG 1 #define TEST sse_test @@ -17,25 +17,10 @@ do_test (void) int main () { -#ifdef __BUILTIN_CPU_SUPPORTS__ - /* Most SSE intrinsic operations can be implemented via VMX - instructions, but some operations may be faster / simpler - using the POWER8 VSX instructions. This is especially true - when we are transferring / converting to / from __m64 types. 
- The direct register transfer instructions from POWER8 are - especially important. So we test for arch_2_07. */ - if ( __builtin_cpu_supports ("arch_2_07") ) - { - do_test (); + do_test (); #ifdef DEBUG - printf ("PASSED\n"); + printf ("PASSED\n"); #endif - } -#ifdef DEBUG - else - printf ("SKIPPED\n"); -#endif -#endif /* __BUILTIN_CPU_SUPPORTS__ */ return 0; } Index: gcc/testsuite/gcc.target/powerpc/sse2-check.h =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/sse2-check.h b/trunk/gcc/testsuite/gcc.target/powerpc/sse2-check.h --- a/trunk/gcc/testsuite/gcc.target/powerpc/sse2-check.h (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/sse2-check.h (working copy) @@ -9,8 +9,6 @@ /* define DEBUG replace abort with printf on error. */ //#define DEBUG 1 -#if 1 - #define TEST sse2_test static void sse2_test (void); @@ -25,28 +23,9 @@ do_test (void) int main () { -#ifdef __BUILTIN_CPU_SUPPORTS__ - /* Most SSE2 (vector double) intrinsic operations require VSX - instructions, but some operations may need only VMX - instructions. This also true for SSE2 scalar doubles as they - imply that "other half" of the vector remains unchanged or set - to zeros. The VSX scalar operations leave ther "other half" - undefined, and require additional merge operations. - Some conversions (to/from integer) need the direct register - transfer instructions from POWER8 for best performance. - So we test for arch_2_07. */ - if ( __builtin_cpu_supports ("arch_2_07") ) - { - do_test (); + do_test (); #ifdef DEBUG - printf ("PASSED\n"); + printf ("PASSED\n"); #endif - } -#ifdef DEBUG - else - printf ("SKIPPED\n"); -#endif -#endif /* __BUILTIN_CPU_SUPPORTS__ */ return 0; } -#endif Index: gcc/testsuite/gcc.target/powerpc/sse3-check.h =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/sse3-check.h b/trunk/gcc/testsuite/gcc.target/powerpc/sse3-check.h --- a/trunk/gcc/testsuite/gcc.target/powerpc/sse3-check.h (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/sse3-check.h (working copy) @@ -20,24 +20,9 @@ do_test (void) int main () { -#ifdef __BUILTIN_CPU_SUPPORTS__ - /* Most SSE intrinsic operations can be implemented via VMX - instructions, but some operations may be faster / simpler - using the POWER8 VSX instructions. This is especially true - when we are transferring / converting to / from __m64 types. - The direct register transfer instructions from POWER8 are - especially important. So we test for arch_2_07. */ - if (__builtin_cpu_supports ("arch_2_07")) - { - do_test (); + do_test (); #ifdef DEBUG - printf ("PASSED\n"); + printf ("PASSED\n"); #endif - } -#ifdef DEBUG - else - printf ("SKIPPED\n"); -#endif -#endif /* __BUILTIN_CPU_SUPPORTS__ */ return 0; } Index: gcc/testsuite/gcc.target/powerpc/ssse3-check.h =================================================================== diff --git a/trunk/gcc/testsuite/gcc.target/powerpc/ssse3-check.h b/trunk/gcc/testsuite/gcc.target/powerpc/ssse3-check.h --- a/trunk/gcc/testsuite/gcc.target/powerpc/ssse3-check.h (revision 266157) +++ b/trunk/gcc/testsuite/gcc.target/powerpc/ssse3-check.h (working copy) @@ -19,24 +19,9 @@ do_test (void) int main () { -#ifdef __BUILTIN_CPU_SUPPORTS__ - /* Most SSE intrinsic operations can be implemented via VMX - instructions, but some operations may be faster / simpler - using the POWER8 VSX instructions. 
This is especially true - when we are transferring / converting to / from __m64 types. - The direct register transfer instructions from POWER8 are - especially important. So we test for arch_2_07. */ - if (__builtin_cpu_supports ("arch_2_07")) - { - do_test (); + do_test (); #ifdef DEBUG - printf ("PASSED\n"); + printf ("PASSED\n"); #endif - } -#ifdef DEBUG - else - printf ("SKIPPED\n"); -#endif -#endif /* __BUILTIN_CPU_SUPPORTS__ */ return 0; }