From patchwork Fri May 12 17:52:17 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 1780747
Date: Fri, 12 May 2023 19:52:17 +0200
Subject: [PATCH] i386: Cleanup ix86_expand_vecop_qihi{,2}
To: "gcc-patches@gcc.gnu.org"
List-Id: Gcc-patches mailing list
From: Uros Bizjak

Some cleanups while looking at these two functions.

gcc/ChangeLog:

	* config/i386/i386-expand.cc (ix86_expand_vecop_qihi2): Also
	reject ymm instructions for TARGET_PREFER_AVX128.  Use generic
	gen_extend_insn to generate zero/sign extension instructions.
	Fix comments.
	(ix86_expand_vecop_qihi): Initialize interleave functions
	for MULT code only.  Fix comments.

Bootstrapped and regression tested on x86_64-linux-gnu {,-m32}.

Pushed to master.

Uros.

diff --git a/gcc/config/i386/i386-expand.cc b/gcc/config/i386/i386-expand.cc
index 634fe61ba79..8a869eb3b30 100644
--- a/gcc/config/i386/i386-expand.cc
+++ b/gcc/config/i386/i386-expand.cc
@@ -23122,12 +23122,11 @@ ix86_expand_vecop_qihi2 (enum rtx_code code, rtx dest, rtx op1, rtx op2)
 {
   machine_mode himode, qimode = GET_MODE (dest);
   rtx hop1, hop2, hdest;
-  rtx (*gen_extend)(rtx, rtx);
   rtx (*gen_truncate)(rtx, rtx);
   bool uns_p = (code == ASHIFTRT) ? false : true;
 
-  /* There's no V64HImode multiplication instruction.  */
-  if (qimode == E_V64QImode)
+  /* There are no V64HImode instructions.  */
+  if (qimode == V64QImode)
     return false;
 
   /* vpmovwb only available under AVX512BW.  */
@@ -23136,26 +23135,24 @@ ix86_expand_vecop_qihi2 (enum rtx_code code, rtx dest, rtx op1, rtx op2)
   if ((qimode == V8QImode || qimode == V16QImode) && !TARGET_AVX512VL)
     return false;
 
-  /* Not generate zmm instruction when prefer 128/256 bit vector width.  */
-  if (qimode == V32QImode
-      && (TARGET_PREFER_AVX128 || TARGET_PREFER_AVX256))
+  /* Do not generate ymm/zmm instructions when
+     target prefers 128/256 bit vector width.  */
+  if ((qimode == V16QImode && TARGET_PREFER_AVX128)
+      || (qimode == V32QImode && TARGET_PREFER_AVX256))
     return false;
 
   switch (qimode)
     {
     case E_V8QImode:
       himode = V8HImode;
-      gen_extend = uns_p ? gen_zero_extendv8qiv8hi2 : gen_extendv8qiv8hi2;
       gen_truncate = gen_truncv8hiv8qi2;
       break;
     case E_V16QImode:
       himode = V16HImode;
-      gen_extend = uns_p ? gen_zero_extendv16qiv16hi2 : gen_extendv16qiv16hi2;
       gen_truncate = gen_truncv16hiv16qi2;
       break;
     case E_V32QImode:
       himode = V32HImode;
-      gen_extend = uns_p ? gen_zero_extendv32qiv32hi2 : gen_extendv32qiv32hi2;
       gen_truncate = gen_truncv32hiv32qi2;
       break;
     default:
@@ -23165,8 +23162,8 @@ ix86_expand_vecop_qihi2 (enum rtx_code code, rtx dest, rtx op1, rtx op2)
   hop1 = gen_reg_rtx (himode);
   hop2 = gen_reg_rtx (himode);
   hdest = gen_reg_rtx (himode);
-  emit_insn (gen_extend (hop1, op1));
-  emit_insn (gen_extend (hop2, op2));
+  emit_insn (gen_extend_insn (hop1, op1, himode, qimode, uns_p));
+  emit_insn (gen_extend_insn (hop2, op2, himode, qimode, uns_p));
   emit_insn (gen_rtx_SET (hdest, simplify_gen_binary (code, himode,
						       hop1, hop2)));
   emit_insn (gen_truncate (dest, hdest));
@@ -23285,8 +23282,9 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
   rtx (*gen_ih) (rtx, rtx, rtx);
   rtx op1_l, op1_h, op2_l, op2_h, res_l, res_h;
   struct expand_vec_perm_d d;
-  bool ok, full_interleave;
-  bool uns_p = false;
+  bool full_interleave = true;
+  bool uns_p = true;
+  bool ok;
   int i;
 
   if (CONST_INT_P (op2)
@@ -23303,18 +23301,12 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
     {
     case E_V16QImode:
       himode = V8HImode;
-      gen_il = gen_vec_interleave_lowv16qi;
-      gen_ih = gen_vec_interleave_highv16qi;
       break;
     case E_V32QImode:
       himode = V16HImode;
-      gen_il = gen_avx2_interleave_lowv32qi;
-      gen_ih = gen_avx2_interleave_highv32qi;
       break;
     case E_V64QImode:
       himode = V32HImode;
-      gen_il = gen_avx512bw_interleave_lowv64qi;
-      gen_ih = gen_avx512bw_interleave_highv64qi;
       break;
     default:
       gcc_unreachable ();
@@ -23327,6 +23319,26 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
	 each word.  We don't care what goes into the high byte of each word.
	 Rather than trying to get zero in there, most convenient is to let
	 it be a copy of the low byte.  */
+      switch (qimode)
+	{
+	case E_V16QImode:
+	  gen_il = gen_vec_interleave_lowv16qi;
+	  gen_ih = gen_vec_interleave_highv16qi;
+	  break;
+	case E_V32QImode:
+	  gen_il = gen_avx2_interleave_lowv32qi;
+	  gen_ih = gen_avx2_interleave_highv32qi;
+	  full_interleave = false;
+	  break;
+	case E_V64QImode:
+	  gen_il = gen_avx512bw_interleave_lowv64qi;
+	  gen_ih = gen_avx512bw_interleave_highv64qi;
+	  full_interleave = false;
+	  break;
+	default:
+	  gcc_unreachable ();
+	}
+
       op2_l = gen_reg_rtx (qimode);
       op2_h = gen_reg_rtx (qimode);
       emit_insn (gen_il (op2_l, op2, op2));
@@ -23336,14 +23348,13 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
       op1_l = gen_reg_rtx (qimode);
       op1_h = gen_reg_rtx (qimode);
       emit_insn (gen_il (op1_l, op1, op1));
       emit_insn (gen_ih (op1_h, op1, op1));
-      full_interleave = qimode == V16QImode;
       break;
 
+    case ASHIFTRT:
+      uns_p = false;
+      /* FALLTHRU */
     case ASHIFT:
     case LSHIFTRT:
-      uns_p = true;
-      /* FALLTHRU */
-    case ASHIFTRT:
       op1_l = gen_reg_rtx (himode);
       op1_h = gen_reg_rtx (himode);
       ix86_expand_sse_unpack (op1_l, op1, uns_p, false);
@@ -23360,16 +23371,15 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
       else
	op2_l = op2_h = op2;
 
-      full_interleave = true;
       break;
 
     default:
       gcc_unreachable ();
     }
 
-  /* Perform vashr/vlshr/vashl.  */
   if (code != MULT
       && GET_MODE_CLASS (GET_MODE (op2)) == MODE_VECTOR_INT)
     {
+      /* Expand vashr/vlshr/vashl.  */
       res_l = gen_reg_rtx (himode);
       res_h = gen_reg_rtx (himode);
       emit_insn (gen_rtx_SET (res_l,
			       simplify_gen_binary (code, himode,
						    op1_l, op2_l)));
       emit_insn (gen_rtx_SET (res_h,
@@ -23379,9 +23389,9 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
			       simplify_gen_binary (code, himode,
						    op1_h, op2_h)));
     }
-  /* Performance mult/ashr/lshr/ashl.  */
   else
     {
+      /* Expand mult/ashr/lshr/ashl.  */
       res_l = expand_simple_binop (himode, code, op1_l, op2_l, NULL_RTX,
				    1, OPTAB_DIRECT);
       res_h = expand_simple_binop (himode, code, op1_h, op2_h, NULL_RTX,
				    1, OPTAB_DIRECT);
@@ -23401,7 +23411,7 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
 
   if (full_interleave)
     {
-      /* For SSE2, we used an full interleave, so the desired
+      /* We used the full interleave, the desired
	 results are in the even elements.  */
       for (i = 0; i < d.nelt; ++i)
	d.perm[i] = i * 2;
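As background for readers unfamiliar with these expanders, here is a minimal scalar C sketch of the two strategies they implement. This is not GCC code; the function names vec_mul_qi and mul_via_interleave and the fixed-width types are hypothetical illustrations only.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of ix86_expand_vecop_qihi2: widen each 8-bit (QImode) element
   to 16 bits (HImode), do the arithmetic there, truncate back --
   the scalar analogue of gen_extend_insn followed by gen_truncate.  */
static void
vec_mul_qi (uint8_t *dest, const uint8_t *op1, const uint8_t *op2, size_t n)
{
  for (size_t i = 0; i < n; i++)
    {
      uint16_t h1 = op1[i];			/* zero extension */
      uint16_t h2 = op2[i];
      uint16_t h = (uint16_t) (h1 * h2);	/* operate in wider mode */
      dest[i] = (uint8_t) h;			/* truncation */
    }
}

/* Sketch of the MULT path in ix86_expand_vecop_qihi: interleaving a
   vector with itself leaves a copy of each byte in the high byte of its
   word.  That copy is harmless, because only the low byte of each
   16-bit product is kept.  */
static uint8_t
mul_via_interleave (uint8_t a, uint8_t b)
{
  uint16_t wa = (uint16_t) ((a << 8) | a);	/* high byte = copy of low */
  uint16_t wb = (uint16_t) ((b << 8) | b);
  return (uint8_t) (wa * wb);	/* low byte equals (a * b) & 0xff */
}
```

The duplicated high byte drops out because wa * wb = a * b * 0x101 * 0x101 and 0x101 * 0x101 is congruent to 1 mod 0x100. It also shows why uns_p only matters for ASHIFTRT in the patch: for multiplication and left/logical shifts the low byte of the result is the same under sign or zero extension.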