From patchwork Thu Nov 20 15:03:17 2014
X-Patchwork-Submitter: Evgeny Stupachenko
X-Patchwork-Id: 412739
Subject: Re: [PATCH x86, PR60451] Expand even/odd permutation using pack insn.
From: Evgeny Stupachenko
To: Richard Henderson
Cc: GCC Patches, Uros Bizjak
Date: Thu, 20 Nov 2014 18:03:17 +0300
In-Reply-To: <546DFB01.6010605@redhat.com>
References: <546DFB01.6010605@redhat.com>

Good point!  "gen_shift" also requires only SSE2.  That way we can optimize
out the interleave sequence for V16QI mode in expand_vec_perm_even_odd_1.

Thanks!
Evgeny

Updated patch:

On Thu, Nov 20, 2014 at 5:30 PM, Richard Henderson wrote:
> On 11/20/2014 12:36 PM, Evgeny Stupachenko wrote:
>> +  /* Required for "pack".  */
>> +  if (!TARGET_SSE4_2 || d->one_operand_p)
>> +    return false;
>
> Why the SSE4_2 check here when...
>
>> +
>> +  /* Only V8HI, V16QI, V16HI and V32QI modes are more profitable than general
>> +     shuffles.  */
>> +  if (d->vmode == V8HImode)
>> +    {
>> +      c = 0xffff;
>> +      s = 16;
>> +      half_mode = V4SImode;
>> +      gen_and = gen_andv4si3;
>> +      gen_pack = gen_sse4_1_packusdw;
>
> ... it's SSE4_1 here,
>
>> +      gen_shift = gen_lshrv4si3;
>> +    }
>> +  else if (d->vmode == V16QImode)
>> +    {
>> +      c = 0xff;
>> +      s = 8;
>> +      half_mode = V8HImode;
>> +      gen_and = gen_andv8hi3;
>> +      gen_pack = gen_sse2_packuswb;
>
> ... and SSE2 here?
>
>
> r~

diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 085eb54..054089b 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -48322,6 +48322,127 @@ expand_vec_perm_vpshufb2_vpermq_even_odd (struct expand_vec_perm_d *d)
   return true;
 }
 
+/* A subroutine of expand_vec_perm_even_odd_1.  Implement extract-even
+   and extract-odd permutations of two V16QI, V8HI, V16HI or V32QI operands
+   with two "and" and "pack" or two "shift" and "pack" insns.  We should
+   have already failed all two instruction sequences.  */
+
+static bool
+expand_vec_perm_even_odd_pack (struct expand_vec_perm_d *d)
+{
+  rtx op, dop0, dop1, t, rperm[16];
+  unsigned i, odd, c, s, nelt = d->nelt;
+  bool end_perm = false;
+  machine_mode half_mode;
+  rtx (*gen_and) (rtx, rtx, rtx);
+  rtx (*gen_pack) (rtx, rtx, rtx);
+  rtx (*gen_shift) (rtx, rtx, rtx);
+
+  if (d->one_operand_p)
+    return false;
+
+  switch (d->vmode)
+    {
+    case V8HImode:
+      /* Required for "pack".  */
+      if (!TARGET_SSE4_1)
+        return false;
+      c = 0xffff;
+      s = 16;
+      half_mode = V4SImode;
+      gen_and = gen_andv4si3;
+      gen_pack = gen_sse4_1_packusdw;
+      gen_shift = gen_lshrv4si3;
+      break;
+    case V16QImode:
+      /* No check as all instructions are SSE2.  */
+      c = 0xff;
+      s = 8;
+      half_mode = V8HImode;
+      gen_and = gen_andv8hi3;
+      gen_pack = gen_sse2_packuswb;
+      gen_shift = gen_lshrv8hi3;
+      break;
+    case V16HImode:
+      if (!TARGET_AVX2)
+        return false;
+      c = 0xffff;
+      s = 16;
+      half_mode = V8SImode;
+      gen_and = gen_andv8si3;
+      gen_pack = gen_avx2_packusdw;
+      gen_shift = gen_lshrv8si3;
+      end_perm = true;
+      break;
+    case V32QImode:
+      if (!TARGET_AVX2)
+        return false;
+      c = 0xff;
+      s = 8;
+      half_mode = V16HImode;
+      gen_and = gen_andv16hi3;
+      gen_pack = gen_avx2_packuswb;
+      gen_shift = gen_lshrv16hi3;
+      end_perm = true;
+      break;
+    default:
+      /* Only V8HI, V16QI, V16HI and V32QI modes are more profitable than
+         general shuffles.  */
+      return false;
+    }
+
+  /* Check that permutation is even or odd.  */
+  odd = d->perm[0];
+  if (odd > 1)
+    return false;
+
+  for (i = 1; i < nelt; ++i)
+    if (d->perm[i] != 2 * i + odd)
+      return false;
+
+  if (d->testing_p)
+    return true;
+
+  dop0 = gen_reg_rtx (half_mode);
+  dop1 = gen_reg_rtx (half_mode);
+  if (odd == 0)
+    {
+      for (i = 0; i < nelt / 2; i++)
+        rperm[i] = GEN_INT (c);
+      t = gen_rtx_CONST_VECTOR (half_mode, gen_rtvec_v (nelt / 2, rperm));
+      t = force_reg (half_mode, t);
+      emit_insn (gen_and (dop0, t, gen_lowpart (half_mode, d->op0)));
+      emit_insn (gen_and (dop1, t, gen_lowpart (half_mode, d->op1)));
+    }
+  else
+    {
+      emit_insn (gen_shift (dop0,
+                            gen_lowpart (half_mode, d->op0),
+                            GEN_INT (s)));
+      emit_insn (gen_shift (dop1,
+                            gen_lowpart (half_mode, d->op1),
+                            GEN_INT (s)));
+    }
+  /* In AVX2 for 256 bit case we need to permute pack result.  */
+  if (TARGET_AVX2 && end_perm)
+    {
+      op = gen_reg_rtx (d->vmode);
+      t = gen_reg_rtx (V4DImode);
+      emit_insn (gen_pack (op, dop0, dop1));
+      emit_insn (gen_avx2_permv4di_1 (t,
+                                      gen_lowpart (V4DImode, op),
+                                      const0_rtx,
+                                      const2_rtx,
+                                      const1_rtx,
+                                      GEN_INT (3)));
+      emit_move_insn (d->target, gen_lowpart (d->vmode, t));
+    }
+  else
+    emit_insn (gen_pack (d->target, dop0, dop1));
+
+  return true;
+}
+
 /* A subroutine of ix86_expand_vec_perm_builtin_1.  Implement extract-even
    and extract-odd permutations.  */
 
@@ -48393,7 +48514,9 @@ expand_vec_perm_even_odd_1 (struct expand_vec_perm_d *d, unsigned odd)
       gcc_unreachable ();
 
     case V8HImode:
-      if (TARGET_SSSE3 && !TARGET_SLOW_PSHUFB)
+      if (TARGET_SSE4_1)
+        return expand_vec_perm_even_odd_pack (d);
+      else if (TARGET_SSSE3 && !TARGET_SLOW_PSHUFB)
         return expand_vec_perm_pshufb2 (d);
       else
         {
@@ -48416,32 +48539,11 @@ expand_vec_perm_even_odd_1 (struct expand_vec_perm_d *d, unsigned odd)
       break;
 
     case V16QImode:
-      if (TARGET_SSSE3 && !TARGET_SLOW_PSHUFB)
-        return expand_vec_perm_pshufb2 (d);
-      else
-        {
-          if (d->testing_p)
-            break;
-          t1 = gen_reg_rtx (V16QImode);
-          t2 = gen_reg_rtx (V16QImode);
-          t3 = gen_reg_rtx (V16QImode);
-          emit_insn (gen_vec_interleave_highv16qi (t1, d->op0, d->op1));
-          emit_insn (gen_vec_interleave_lowv16qi (d->target, d->op0, d->op1));
-          emit_insn (gen_vec_interleave_highv16qi (t2, d->target, t1));
-          emit_insn (gen_vec_interleave_lowv16qi (d->target, d->target, t1));
-          emit_insn (gen_vec_interleave_highv16qi (t3, d->target, t2));
-          emit_insn (gen_vec_interleave_lowv16qi (d->target, d->target, t2));
-          if (odd)
-            t3 = gen_vec_interleave_highv16qi (d->target, d->target, t3);
-          else
-            t3 = gen_vec_interleave_lowv16qi (d->target, d->target, t3);
-          emit_insn (t3);
-        }
-      break;
+      return expand_vec_perm_even_odd_pack (d);
 
     case V16HImode:
     case V32QImode:
-      return expand_vec_perm_vpshufb2_vpermq_even_odd (d);
+      return expand_vec_perm_even_odd_pack (d);
 
     case V4DImode:
       if (!TARGET_AVX2)
@@ -48814,6 +48916,9 @@ ix86_expand_vec_perm_const_1 (struct expand_vec_perm_d *d)
 
   /* Try sequences of three instructions.  */
 
+  if (expand_vec_perm_even_odd_pack (d))
+    return true;
+
   if (expand_vec_perm_2vperm2f128_vshuf (d))
     return true;
 
diff --git a/gcc/testsuite/gcc.target/i386/pr60451.c b/gcc/testsuite/gcc.target/i386/pr60451.c
new file mode 100644
index 0000000..c600f4a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr60451.c
@@ -0,0 +1,14 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target sse2 } */
+/* { dg-options "-O2 -ftree-vectorize -msse2" } */
+
+void
+foo (unsigned char *a, unsigned char *b, unsigned char *c, int size)
+{
+  int i;
+
+  for (i = 0; i < size; i++)
+    a[i] = (unsigned char) ((unsigned int)1 + b[i] * c[i] * 117);
+}
+
+/* { dg-final { scan-assembler "packuswb|vpunpck" } } */
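
For reference, the and+pack and shift+pack sequences that the new
expand_vec_perm_even_odd_pack routine emits for the V16QImode case
correspond roughly to the following SSE2 intrinsics sketch.  The helper
names here are invented for illustration and are not part of the patch;
this is only a sketch of the idiom, not the generated RTL.

#include <emmintrin.h>

/* Extract the even-indexed bytes of the 32-byte concatenation {a, b}:
   mask each 16-bit lane down to its low byte, then narrow the two
   masked vectors into one with packuswb.  */
static __m128i
extract_even_u8 (__m128i a, __m128i b)
{
  const __m128i mask = _mm_set1_epi16 (0x00ff);
  return _mm_packus_epi16 (_mm_and_si128 (a, mask),
                           _mm_and_si128 (b, mask));
}

/* Extract the odd-indexed bytes: shift each 16-bit lane right by 8 so
   the odd byte becomes the zero-extended low byte, then narrow with
   packuswb as above.  */
static __m128i
extract_odd_u8 (__m128i a, __m128i b)
{
  return _mm_packus_epi16 (_mm_srli_epi16 (a, 8),
                           _mm_srli_epi16 (b, 8));
}

For V8HImode the same idea needs a 0xffff mask per 32-bit lane and
packusdw (_mm_packus_epi32), which is only available with SSE4.1; the
byte variant above needs nothing beyond SSE2, which is the distinction
raised in the review above.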