From patchwork Fri Apr 4 20:45:13 2014
X-Patchwork-Submitter: Bill Schmidt
X-Patchwork-Id: 337117
Message-ID: <1396644313.5401.65.camel@gnopaine>
Subject: [4.8, PATCH, rs6000] (Re: [PATCH, rs6000] More efficient vector permute for little endian)
From: Bill Schmidt
To: gcc-patches@gcc.gnu.org
Cc: dje.gcc@gmail.com, rth@redhat.com
Date: Fri, 04 Apr 2014 15:45:13 -0500
In-Reply-To: <1395365894.3599.35.camel@gnopaine>
References: <1395365894.3599.35.camel@gnopaine>

On Thu, 2014-03-20 at 20:38 -0500, Bill Schmidt wrote:
> The original workaround for vector permute on a little endian platform
> includes subtracting each element of the permute control vector from 31.
> Because the upper 3 bits of each element are unimportant, this was
> implemented as subtracting the whole vector from a splat of -1.  On
> reflection this can be done more efficiently with a vector nor
> operation.  This patch makes that change.

This patch was approved and committed to trunk and to the IBM 4.8
branch.  I would like approval to commit it to the FSF 4.8 branch as
well.  Per Richard Henderson's previous comment, I have changed the
patch slightly to avoid the use of emit_move_insn.

Bootstrapped and tested on powerpc64le-unknown-linux-gnu.  The previous
version has burned in on trunk and the IBM 4.8 branch for about two
weeks.  Is this ok for FSF 4.8?

Thanks,
Bill


2014-04-04  Bill Schmidt

	* config/rs6000/rs6000.c (rs6000_expand_vector_set): Generate a
	pattern for vector nor instead of subtract from splat(-1).
	(altivec_expand_vec_perm_const_le): Likewise.
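Before the patch itself, a quick standalone check (not part of the patch)
of the equivalence the change relies on: since vperm only uses the
low-order five bits of each permute control element, subtracting from 31,
subtracting from a splat of -1, and a plain bitwise complement (which is
what a vnor of the selector with itself computes) all produce the same
selector bits:

#include <assert.h>
#include <stdio.h>

int
main (void)
{
  /* Only the low-order five bits of each permute control element
     matter, so 31 - x, -1 - x and ~x are interchangeable.  */
  for (int x = 0; x < 256; x++)
    {
      assert (((31 - x) & 31) == ((-1 - x) & 31));
      assert (((-1 - x) & 31) == (~x & 31));
    }
  printf ("31 - x, -1 - x and ~x agree in the low five bits\n");
  return 0;
}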
Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c	(revision 209122)
+++ gcc/config/rs6000/rs6000.c	(working copy)
@@ -5621,12 +5621,10 @@ rs6000_expand_vector_set (rtx target, rtx val, int
   else
     {
       /* Invert selector.  */
-      rtx splat = gen_rtx_VEC_DUPLICATE (V16QImode,
-					 gen_rtx_CONST_INT (QImode, -1));
+      rtx notx = gen_rtx_NOT (V16QImode, force_reg (V16QImode, x));
+      rtx andx = gen_rtx_AND (V16QImode, notx, notx);
       rtx tmp = gen_reg_rtx (V16QImode);
-      emit_move_insn (tmp, splat);
-      x = gen_rtx_MINUS (V16QImode, tmp, force_reg (V16QImode, x));
-      emit_move_insn (tmp, x);
+      emit_insn (gen_rtx_SET (VOIDmode, tmp, andx));
 
       /* Permute with operands reversed and adjusted selector.  */
       x = gen_rtx_UNSPEC (mode, gen_rtvec (3, reg, target, tmp),
@@ -30335,18 +30333,18 @@ altivec_expand_vec_perm_const_le (rtx operands[4])
 
 /* Similarly to altivec_expand_vec_perm_const_le, we must adjust the
    permute control vector.  But here it's not a constant, so we must
-   generate a vector splat/subtract to do the adjustment.  */
+   generate a vector NOR to do the adjustment.  */
 
 void
 altivec_expand_vec_perm_le (rtx operands[4])
 {
-  rtx splat, unspec;
+  rtx notx, andx, unspec;
   rtx target = operands[0];
   rtx op0 = operands[1];
   rtx op1 = operands[2];
   rtx sel = operands[3];
   rtx tmp = target;
-  rtx splatreg = gen_reg_rtx (V16QImode);
+  rtx norreg = gen_reg_rtx (V16QImode);
   enum machine_mode mode = GET_MODE (target);
 
   /* Get everything in regs so the pattern matches.  */
@@ -30359,18 +30357,14 @@ altivec_expand_vec_perm_le (rtx operands[4])
   if (!REG_P (target))
     tmp = gen_reg_rtx (mode);
 
-  /* SEL = splat(31) - SEL.  */
-  /* We want to subtract from 31, but we can't vspltisb 31 since
-     it's out of range.  -1 works as well because only the low-order
-     five bits of the permute control vector elements are used.  */
-  splat = gen_rtx_VEC_DUPLICATE (V16QImode,
-				 gen_rtx_CONST_INT (QImode, -1));
-  emit_move_insn (splatreg, splat);
-  sel = gen_rtx_MINUS (V16QImode, splatreg, sel);
-  emit_move_insn (splatreg, sel);
+  /* Invert the selector with a VNOR.  */
+  notx = gen_rtx_NOT (V16QImode, sel);
+  andx = gen_rtx_AND (V16QImode, notx, notx);
+  emit_insn (gen_rtx_SET (VOIDmode, norreg, andx));
 
   /* Permute with operands reversed and adjusted selector.  */
-  unspec = gen_rtx_UNSPEC (mode, gen_rtvec (3, op1, op0, splatreg), UNSPEC_VPERM);
+  unspec = gen_rtx_UNSPEC (mode, gen_rtvec (3, op1, op0, norreg),
+			   UNSPEC_VPERM);
 
   /* Copy into target, possibly by way of a register.  */
   if (!REG_P (target))
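As a further sanity check, again not part of the patch: a small host-side
model of the little-endian adjustment that altivec_expand_vec_perm_le
arranges.  The vperm_hw helper below is my own byte-level approximation of
the hardware vperm (which numbers bytes big-endian), written only to
confirm that reversing the operands and complementing the selector picks
the elements we want in little-endian numbering.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Approximate hardware vperm.  The hardware numbers bytes big-endian
   (byte 0 is the most significant), while the arrays here are kept in
   little-endian element order, so BE byte J of a vector V is V[15 - J].  */
static void
vperm_hw (const uint8_t *va, const uint8_t *vb, const uint8_t *vc,
          uint8_t *out)
{
  for (int i = 0; i < 16; i++)        /* i is a BE byte index.  */
    {
      unsigned idx = vc[15 - i] & 31; /* low five bits of control byte i */
      out[15 - i] = idx < 16 ? va[15 - idx] : vb[15 - (idx - 16)];
    }
}

int
main (void)
{
  uint8_t op0[16], op1[16], sel[16], inv[16], got[16];

  for (int i = 0; i < 16; i++)
    {
      op0[i] = i;                     /* arbitrary test data */
      op1[i] = 0x40 + i;
      sel[i] = (7 * i + 3) & 31;      /* arbitrary selector, LE numbering */
      inv[i] = ~sel[i];               /* what the vnor produces */
    }

  /* Operands reversed and selector complemented, as the expander emits.  */
  vperm_hw (op1, op0, inv, got);

  /* Expected result: element i of the LE-numbered concatenation of op0
     (elements 0-15) and op1 (elements 16-31), selected by sel[i].  */
  for (int i = 0; i < 16; i++)
    {
      uint8_t want = sel[i] < 16 ? op0[sel[i]] : op1[sel[i] - 16];
      assert (got[i] == want);
    }

  printf ("reversed operands + complemented selector check out\n");
  return 0;
}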