From patchwork Thu Mar 2 10:45:59 2017
X-Patchwork-Submitter: Thomas Koenig
X-Patchwork-Id: 734550
From: Thomas Koenig
To: Jakub Jelinek
Cc: fortran@gcc.gnu.org, gcc-patches@gcc.gnu.org
Subject: Re: [patch, fortran] Enable FMA for AVX2 and AVX512F for matmul
Date: Thu, 2 Mar 2017 11:45:59 +0100
In-Reply-To: <20170302090811.GJ1849@tucnak>
References: <6ecdb93d-3aef-746c-8ca0-ed6b78eb12d4@netcologne.de>
 <20170302084337.GI1849@tucnak>
 <1825c548-1d6c-b10f-fc59-07babf432834@netcologne.de>
 <20170302090811.GJ1849@tucnak>
"fortran@gcc.gnu.org" , gcc-patches From: Thomas Koenig Message-ID: Date: Thu, 2 Mar 2017 11:45:59 +0100 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.7.1 MIME-Version: 1.0 In-Reply-To: <20170302090811.GJ1849@tucnak> Here's the updated version, which just uses FMA for AVX2. OK for trunk? Regards Thomas 2017-03-01 Thomas Koenig PR fortran/78379 * m4/matmul.m4: (matmul_'rtype_code`_avx2): Also generate for reals. Add fma to target options. (matmul_'rtype_code`): Call AVX2 only if FMA is available. * generated/matmul_c10.c: Regenerated. * generated/matmul_c16.c: Regenerated. * generated/matmul_c4.c: Regenerated. * generated/matmul_c8.c: Regenerated. * generated/matmul_i1.c: Regenerated. * generated/matmul_i16.c: Regenerated. * generated/matmul_i2.c: Regenerated. * generated/matmul_i4.c: Regenerated. * generated/matmul_i8.c: Regenerated. * generated/matmul_r10.c: Regenerated. * generated/matmul_r16.c: Regenerated. * generated/matmul_r4.c: Regenerated. * generated/matmul_r8.c: Regenerated. Index: generated/matmul_c10.c =================================================================== --- generated/matmul_c10.c (Revision 245760) +++ generated/matmul_c10.c (Arbeitskopie) @@ -74,9 +74,6 @@ extern void matmul_c10 (gfc_array_c10 * const rest int blas_limit, blas_call gemm); export_proto(matmul_c10); - - - /* Put exhaustive list of possible architectures here here, ORed together. */ #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F) @@ -628,7 +625,7 @@ matmul_c10_avx (gfc_array_c10 * const restrict ret static void matmul_c10_avx2 (gfc_array_c10 * const restrict retarray, gfc_array_c10 * const restrict a, gfc_array_c10 * const restrict b, int try_blas, - int blas_limit, blas_call gemm) __attribute__((__target__("avx2"))); + int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma"))); static void matmul_c10_avx2 (gfc_array_c10 * const restrict retarray, gfc_array_c10 * const restrict a, gfc_array_c10 * const restrict b, int try_blas, @@ -2277,7 +2274,8 @@ void matmul_c10 (gfc_array_c10 * const restrict re #endif /* HAVE_AVX512F */ #ifdef HAVE_AVX2 - if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2)) + if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2)) + && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA))) { matmul_p = matmul_c10_avx2; goto tailcall; Index: generated/matmul_c16.c =================================================================== --- generated/matmul_c16.c (Revision 245760) +++ generated/matmul_c16.c (Arbeitskopie) @@ -74,9 +74,6 @@ extern void matmul_c16 (gfc_array_c16 * const rest int blas_limit, blas_call gemm); export_proto(matmul_c16); - - - /* Put exhaustive list of possible architectures here here, ORed together. 
Index: generated/matmul_c10.c
===================================================================
--- generated/matmul_c10.c	(Revision 245760)
+++ generated/matmul_c10.c	(Arbeitskopie)
@@ -74,9 +74,6 @@ extern void matmul_c10 (gfc_array_c10 * const rest
 	int blas_limit, blas_call gemm);
 export_proto(matmul_c10);
 
-
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -628,7 +625,7 @@ matmul_c10_avx (gfc_array_c10 * const restrict ret
 static void
 matmul_c10_avx2 (gfc_array_c10 * const restrict retarray, 
 	gfc_array_c10 * const restrict a, gfc_array_c10 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_c10_avx2 (gfc_array_c10 * const restrict retarray, 
 	gfc_array_c10 * const restrict a, gfc_array_c10 * const restrict b, int try_blas,
@@ -2277,7 +2274,8 @@ void matmul_c10 (gfc_array_c10 * const restrict re
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_c10_avx2;
 	  goto tailcall;
Index: generated/matmul_c16.c
===================================================================
--- generated/matmul_c16.c	(Revision 245760)
+++ generated/matmul_c16.c	(Arbeitskopie)
@@ -74,9 +74,6 @@ extern void matmul_c16 (gfc_array_c16 * const rest
 	int blas_limit, blas_call gemm);
 export_proto(matmul_c16);
 
-
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -628,7 +625,7 @@ matmul_c16_avx (gfc_array_c16 * const restrict ret
 static void
 matmul_c16_avx2 (gfc_array_c16 * const restrict retarray, 
 	gfc_array_c16 * const restrict a, gfc_array_c16 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_c16_avx2 (gfc_array_c16 * const restrict retarray, 
 	gfc_array_c16 * const restrict a, gfc_array_c16 * const restrict b, int try_blas,
@@ -2277,7 +2274,8 @@ void matmul_c16 (gfc_array_c16 * const restrict re
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_c16_avx2;
 	  goto tailcall;
Index: generated/matmul_c4.c
===================================================================
--- generated/matmul_c4.c	(Revision 245760)
+++ generated/matmul_c4.c	(Arbeitskopie)
@@ -74,9 +74,6 @@ extern void matmul_c4 (gfc_array_c4 * const restri
 	int blas_limit, blas_call gemm);
 export_proto(matmul_c4);
 
-
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -628,7 +625,7 @@ matmul_c4_avx (gfc_array_c4 * const restrict retar
 static void
 matmul_c4_avx2 (gfc_array_c4 * const restrict retarray, 
 	gfc_array_c4 * const restrict a, gfc_array_c4 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_c4_avx2 (gfc_array_c4 * const restrict retarray, 
 	gfc_array_c4 * const restrict a, gfc_array_c4 * const restrict b, int try_blas,
@@ -2277,7 +2274,8 @@ void matmul_c4 (gfc_array_c4 * const restrict reta
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_c4_avx2;
 	  goto tailcall;
Index: generated/matmul_c8.c
===================================================================
--- generated/matmul_c8.c	(Revision 245760)
+++ generated/matmul_c8.c	(Arbeitskopie)
@@ -74,9 +74,6 @@ extern void matmul_c8 (gfc_array_c8 * const restri
 	int blas_limit, blas_call gemm);
 export_proto(matmul_c8);
 
-
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -628,7 +625,7 @@ matmul_c8_avx (gfc_array_c8 * const restrict retar
 static void
 matmul_c8_avx2 (gfc_array_c8 * const restrict retarray, 
 	gfc_array_c8 * const restrict a, gfc_array_c8 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_c8_avx2 (gfc_array_c8 * const restrict retarray, 
 	gfc_array_c8 * const restrict a, gfc_array_c8 * const restrict b, int try_blas,
@@ -2277,7 +2274,8 @@ void matmul_c8 (gfc_array_c8 * const restrict reta
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_c8_avx2;
 	  goto tailcall;
Index: generated/matmul_i1.c
===================================================================
--- generated/matmul_i1.c	(Revision 245760)
+++ generated/matmul_i1.c	(Arbeitskopie)
@@ -74,9 +74,6 @@ extern void matmul_i1 (gfc_array_i1 * const restri
 	int blas_limit, blas_call gemm);
 export_proto(matmul_i1);
 
-
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -628,7 +625,7 @@ matmul_i1_avx (gfc_array_i1 * const restrict retar
 static void
 matmul_i1_avx2 (gfc_array_i1 * const restrict retarray, 
 	gfc_array_i1 * const restrict a, gfc_array_i1 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_i1_avx2 (gfc_array_i1 * const restrict retarray, 
 	gfc_array_i1 * const restrict a, gfc_array_i1 * const restrict b, int try_blas,
@@ -2277,7 +2274,8 @@ void matmul_i1 (gfc_array_i1 * const restrict reta
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_i1_avx2;
 	  goto tailcall;
Index: generated/matmul_i16.c
===================================================================
--- generated/matmul_i16.c	(Revision 245760)
+++ generated/matmul_i16.c	(Arbeitskopie)
@@ -74,9 +74,6 @@ extern void matmul_i16 (gfc_array_i16 * const rest
 	int blas_limit, blas_call gemm);
 export_proto(matmul_i16);
 
-
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -628,7 +625,7 @@ matmul_i16_avx (gfc_array_i16 * const restrict ret
 static void
 matmul_i16_avx2 (gfc_array_i16 * const restrict retarray, 
 	gfc_array_i16 * const restrict a, gfc_array_i16 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_i16_avx2 (gfc_array_i16 * const restrict retarray, 
 	gfc_array_i16 * const restrict a, gfc_array_i16 * const restrict b, int try_blas,
@@ -2277,7 +2274,8 @@ void matmul_i16 (gfc_array_i16 * const restrict re
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_i16_avx2;
 	  goto tailcall;
Index: generated/matmul_i2.c
===================================================================
--- generated/matmul_i2.c	(Revision 245760)
+++ generated/matmul_i2.c	(Arbeitskopie)
@@ -74,9 +74,6 @@ extern void matmul_i2 (gfc_array_i2 * const restri
 	int blas_limit, blas_call gemm);
 export_proto(matmul_i2);
 
-
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -628,7 +625,7 @@ matmul_i2_avx (gfc_array_i2 * const restrict retar
 static void
 matmul_i2_avx2 (gfc_array_i2 * const restrict retarray, 
 	gfc_array_i2 * const restrict a, gfc_array_i2 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_i2_avx2 (gfc_array_i2 * const restrict retarray, 
 	gfc_array_i2 * const restrict a, gfc_array_i2 * const restrict b, int try_blas,
@@ -2277,7 +2274,8 @@ void matmul_i2 (gfc_array_i2 * const restrict reta
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_i2_avx2;
 	  goto tailcall;
Index: generated/matmul_i4.c
===================================================================
--- generated/matmul_i4.c	(Revision 245760)
+++ generated/matmul_i4.c	(Arbeitskopie)
@@ -74,9 +74,6 @@ extern void matmul_i4 (gfc_array_i4 * const restri
 	int blas_limit, blas_call gemm);
 export_proto(matmul_i4);
 
-
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -628,7 +625,7 @@ matmul_i4_avx (gfc_array_i4 * const restrict retar
 static void
 matmul_i4_avx2 (gfc_array_i4 * const restrict retarray, 
 	gfc_array_i4 * const restrict a, gfc_array_i4 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_i4_avx2 (gfc_array_i4 * const restrict retarray, 
 	gfc_array_i4 * const restrict a, gfc_array_i4 * const restrict b, int try_blas,
@@ -2277,7 +2274,8 @@ void matmul_i4 (gfc_array_i4 * const restrict reta
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_i4_avx2;
 	  goto tailcall;
Index: generated/matmul_i8.c
===================================================================
--- generated/matmul_i8.c	(Revision 245760)
+++ generated/matmul_i8.c	(Arbeitskopie)
@@ -74,9 +74,6 @@ extern void matmul_i8 (gfc_array_i8 * const restri
 	int blas_limit, blas_call gemm);
 export_proto(matmul_i8);
 
-
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -628,7 +625,7 @@ matmul_i8_avx (gfc_array_i8 * const restrict retar
 static void
 matmul_i8_avx2 (gfc_array_i8 * const restrict retarray, 
 	gfc_array_i8 * const restrict a, gfc_array_i8 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_i8_avx2 (gfc_array_i8 * const restrict retarray, 
 	gfc_array_i8 * const restrict a, gfc_array_i8 * const restrict b, int try_blas,
@@ -2277,7 +2274,8 @@ void matmul_i8 (gfc_array_i8 * const restrict reta
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_i8_avx2;
 	  goto tailcall;
Index: generated/matmul_r10.c
===================================================================
--- generated/matmul_r10.c	(Revision 245760)
+++ generated/matmul_r10.c	(Arbeitskopie)
@@ -74,13 +74,6 @@ extern void matmul_r10 (gfc_array_r10 * const rest
 	int blas_limit, blas_call gemm);
 export_proto(matmul_r10);
 
-#if defined(HAVE_AVX) && defined(HAVE_AVX2)
-/* REAL types generate identical code for AVX and AVX2.  Only generate
-   an AVX2 function if we are dealing with integer.  */
-#undef HAVE_AVX2
-#endif
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -632,7 +625,7 @@ matmul_r10_avx (gfc_array_r10 * const restrict ret
 static void
 matmul_r10_avx2 (gfc_array_r10 * const restrict retarray, 
 	gfc_array_r10 * const restrict a, gfc_array_r10 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_r10_avx2 (gfc_array_r10 * const restrict retarray, 
 	gfc_array_r10 * const restrict a, gfc_array_r10 * const restrict b, int try_blas,
@@ -2281,7 +2274,8 @@ void matmul_r10 (gfc_array_r10 * const restrict re
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_r10_avx2;
 	  goto tailcall;
Index: generated/matmul_r16.c
===================================================================
--- generated/matmul_r16.c	(Revision 245760)
+++ generated/matmul_r16.c	(Arbeitskopie)
@@ -74,13 +74,6 @@ extern void matmul_r16 (gfc_array_r16 * const rest
 	int blas_limit, blas_call gemm);
 export_proto(matmul_r16);
 
-#if defined(HAVE_AVX) && defined(HAVE_AVX2)
-/* REAL types generate identical code for AVX and AVX2.  Only generate
-   an AVX2 function if we are dealing with integer.  */
-#undef HAVE_AVX2
-#endif
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -632,7 +625,7 @@ matmul_r16_avx (gfc_array_r16 * const restrict ret
 static void
 matmul_r16_avx2 (gfc_array_r16 * const restrict retarray, 
 	gfc_array_r16 * const restrict a, gfc_array_r16 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_r16_avx2 (gfc_array_r16 * const restrict retarray, 
 	gfc_array_r16 * const restrict a, gfc_array_r16 * const restrict b, int try_blas,
@@ -2281,7 +2274,8 @@ void matmul_r16 (gfc_array_r16 * const restrict re
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_r16_avx2;
 	  goto tailcall;
Index: generated/matmul_r4.c
===================================================================
--- generated/matmul_r4.c	(Revision 245760)
+++ generated/matmul_r4.c	(Arbeitskopie)
@@ -74,13 +74,6 @@ extern void matmul_r4 (gfc_array_r4 * const restri
 	int blas_limit, blas_call gemm);
 export_proto(matmul_r4);
 
-#if defined(HAVE_AVX) && defined(HAVE_AVX2)
-/* REAL types generate identical code for AVX and AVX2.  Only generate
-   an AVX2 function if we are dealing with integer.  */
-#undef HAVE_AVX2
-#endif
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -632,7 +625,7 @@ matmul_r4_avx (gfc_array_r4 * const restrict retar
 static void
 matmul_r4_avx2 (gfc_array_r4 * const restrict retarray, 
 	gfc_array_r4 * const restrict a, gfc_array_r4 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_r4_avx2 (gfc_array_r4 * const restrict retarray, 
 	gfc_array_r4 * const restrict a, gfc_array_r4 * const restrict b, int try_blas,
@@ -2281,7 +2274,8 @@ void matmul_r4 (gfc_array_r4 * const restrict reta
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_r4_avx2;
 	  goto tailcall;
Index: generated/matmul_r8.c
===================================================================
--- generated/matmul_r8.c	(Revision 245760)
+++ generated/matmul_r8.c	(Arbeitskopie)
@@ -74,13 +74,6 @@ extern void matmul_r8 (gfc_array_r8 * const restri
 	int blas_limit, blas_call gemm);
 export_proto(matmul_r8);
 
-#if defined(HAVE_AVX) && defined(HAVE_AVX2)
-/* REAL types generate identical code for AVX and AVX2.  Only generate
-   an AVX2 function if we are dealing with integer.  */
-#undef HAVE_AVX2
-#endif
-
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -632,7 +625,7 @@ matmul_r8_avx (gfc_array_r8 * const restrict retar
 static void
 matmul_r8_avx2 (gfc_array_r8 * const restrict retarray, 
 	gfc_array_r8 * const restrict a, gfc_array_r8 * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static void
 matmul_r8_avx2 (gfc_array_r8 * const restrict retarray, 
 	gfc_array_r8 * const restrict a, gfc_array_r8 * const restrict b, int try_blas,
@@ -2281,7 +2274,8 @@ void matmul_r8 (gfc_array_r8 * const restrict reta
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_r8_avx2;
 	  goto tailcall;
Index: m4/matmul.m4
===================================================================
--- m4/matmul.m4	(Revision 245760)
+++ m4/matmul.m4	(Arbeitskopie)
@@ -75,14 +75,6 @@ extern void matmul_'rtype_code` ('rtype` * const r
 	int blas_limit, blas_call gemm);
 export_proto(matmul_'rtype_code`);
 
-'ifelse(rtype_letter,`r',dnl
-`#if defined(HAVE_AVX) && defined(HAVE_AVX2)
-/* REAL types generate identical code for AVX and AVX2.  Only generate
-   an AVX2 function if we are dealing with integer.  */
-#undef HAVE_AVX2
-#endif')
-`
-
 /* Put exhaustive list of possible architectures here here, ORed together.  */
 
 #if defined(HAVE_AVX) || defined(HAVE_AVX2) || defined(HAVE_AVX512F)
@@ -101,7 +93,7 @@ static' include(matmul_internal.m4)dnl
 `static void
 'matmul_name` ('rtype` * const restrict retarray, 
 	'rtype` * const restrict a, 'rtype` * const restrict b, int try_blas,
-	int blas_limit, blas_call gemm) __attribute__((__target__("avx2")));
+	int blas_limit, blas_call gemm) __attribute__((__target__("avx2,fma")));
 static' include(matmul_internal.m4)dnl
 `#endif  /* HAVE_AVX2 */
 
@@ -147,7 +139,8 @@ void matmul_'rtype_code` ('rtype` * const restrict
 #endif  /* HAVE_AVX512F */
 
 #ifdef HAVE_AVX2
-      if (__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+      if ((__cpu_model.__cpu_features[0] & (1 << FEATURE_AVX2))
+	  && (__cpu_model.__cpu_features[0] & (1 << FEATURE_FMA)))
 	{
 	  matmul_p = matmul_'rtype_code`_avx2;
 	  goto tailcall;
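A closing note on the "avx2,fma" target options used throughout the diff: AVX2 and
FMA are distinct ISA extensions, so a function compiled under __target__("avx2")
alone is never given fused multiply-add instructions.  The short sketch below is
illustrative only (the names are made up, not libgfortran code) and assumes x86-64
GCC at -O2: the same loop appears twice in one translation unit, and only the
attributed copy may be vectorized with vfmadd* instructions.

/* Editorial sketch, not part of the patch.  */
#include <stdio.h>

#define N 1024

static void
axpy_generic (const double *restrict a, const double *restrict b,
              double *restrict c)
{
  for (int i = 0; i < N; i++)
    c[i] += a[i] * b[i];
}

/* Same loop; here the compiler may use AVX2 and FMA instructions.  */
__attribute__((__target__("avx2,fma")))
static void
axpy_avx2_fma (const double *restrict a, const double *restrict b,
               double *restrict c)
{
  for (int i = 0; i < N; i++)
    c[i] += a[i] * b[i];
}

int
main (void)
{
  static double a[N], b[N], c1[N], c2[N];

  for (int i = 0; i < N; i++)
    {
      a[i] = 0.5 * i;
      b[i] = 2.0 - 0.25 * i;
    }

  axpy_generic (a, b, c1);
  axpy_avx2_fma (a, b, c2);   /* safe only on a CPU with AVX2 and FMA */
  printf ("%g %g\n", c1[N - 1], c2[N - 1]);
  return 0;
}

Calling the attributed copy directly is only safe on an FMA-capable CPU, which is
why the dispatch code above now tests FEATURE_FMA before selecting the _avx2
variant.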