From patchwork Wed Sep 26 05:03:46 2018
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 974849
From: rth7680@gmail.com
To: gcc-patches@gcc.gnu.org
Cc: ramana.radhakrishnan@arm.com, agraf@suse.de, matz@suse.de, Richard Henderson
Subject: [PATCH, AArch64 02/11] aarch64: Improve cas generation
Date: Tue, 25 Sep 2018 22:03:46 -0700
Message-Id: <20180926050355.32746-3-richard.henderson@linaro.org>
In-Reply-To: <20180926050355.32746-1-richard.henderson@linaro.org>
References: <20180926050355.32746-1-richard.henderson@linaro.org>

From: Richard Henderson

Do not zero-extend the input to the cas for subword operations;
instead, use the appropriate zero-extending compare insns.
Correct the predicates and constraints for immediate expected operand.

	* config/aarch64/aarch64.c (aarch64_gen_compare_reg_maybe_ze): New.
	(aarch64_split_compare_and_swap): Use it.
	(aarch64_expand_compare_and_swap): Likewise.  Remove convert_modes;
	test oldval against the proper predicate.
	* config/aarch64/atomics.md (@atomic_compare_and_swap<ALLI>): Use
	nonmemory_operand for expected.
	(cas_short_expected_pred): New.
	(@aarch64_compare_and_swap<SHORT>): Use it; use "rn" not "rI" to match.
	(@aarch64_compare_and_swap<GPI>): Use "rn" not "rI" for expected.
	* config/aarch64/predicates.md (aarch64_plushi_immediate): New.
	(aarch64_plushi_operand): New.
---
 gcc/config/aarch64/aarch64.c     | 85 ++++++++++++++++++++------------
 gcc/config/aarch64/atomics.md    | 21 ++++----
 gcc/config/aarch64/predicates.md | 12 +++++
 3 files changed, 77 insertions(+), 41 deletions(-)
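For illustration only (not part of the patch): a minimal C example of the
kind of subword compare-and-swap affected.  Previously the expander
zero-extended the expected value to SImode up front; with this patch the
loaded subword is compared against the expected value via the new
zero-extending compare helper, aarch64_gen_compare_reg_maybe_ze.  The
function and parameter names below are hypothetical, not taken from the
patch or the testsuite.

    /* Illustration only: a 16-bit CAS of the sort handled by
       aarch64_expand_compare_and_swap.  */
    #include <stdint.h>
    #include <stdbool.h>

    bool
    try_update (uint16_t *p, uint16_t expected, uint16_t newval)
    {
      return __atomic_compare_exchange_n (p, &expected, newval, false,
                                          __ATOMIC_SEQ_CST,
                                          __ATOMIC_SEQ_CST);
    }
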
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index a0ba358c2f1..c0f2d296342 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -1613,6 +1613,33 @@ aarch64_gen_compare_reg (RTX_CODE code, rtx x, rtx y)
   return cc_reg;
 }
 
+/* Similarly, but maybe zero-extend Y if Y_MODE < SImode.  */
+
+static rtx
+aarch64_gen_compare_reg_maybe_ze (RTX_CODE code, rtx x, rtx y,
+                                  machine_mode y_mode)
+{
+  if (y_mode == E_QImode || y_mode == E_HImode)
+    {
+      if (CONST_INT_P (y))
+	y = GEN_INT (INTVAL (y) & GET_MODE_MASK (y_mode));
+      else
+	{
+	  rtx t, cc_reg;
+	  machine_mode cc_mode;
+
+	  t = gen_rtx_ZERO_EXTEND (SImode, y);
+	  t = gen_rtx_COMPARE (CC_SWPmode, t, x);
+	  cc_mode = CC_SWPmode;
+	  cc_reg = gen_rtx_REG (cc_mode, CC_REGNUM);
+	  emit_set_insn (cc_reg, t);
+	  return cc_reg;
+	}
+    }
+
+  return aarch64_gen_compare_reg (code, x, y);
+}
+
 /* Build the SYMBOL_REF for __tls_get_addr.  */
 
 static GTY(()) rtx tls_get_addr_libfunc;
@@ -14138,8 +14165,8 @@ aarch64_emit_unlikely_jump (rtx insn)
 void
 aarch64_expand_compare_and_swap (rtx operands[])
 {
-  rtx bval, rval, mem, oldval, newval, is_weak, mod_s, mod_f, x;
-  machine_mode mode, cmp_mode;
+  rtx bval, rval, mem, oldval, newval, is_weak, mod_s, mod_f, x, cc_reg;
+  machine_mode mode, r_mode;
 
   bval = operands[0];
   rval = operands[1];
@@ -14150,56 +14177,50 @@ aarch64_expand_compare_and_swap (rtx operands[])
   mod_s = operands[6];
   mod_f = operands[7];
   mode = GET_MODE (mem);
-  cmp_mode = mode;
 
   /* Normally the succ memory model must be stronger than fail, but in the
      unlikely event of fail being ACQUIRE and succ being RELEASE we need to
      promote succ to ACQ_REL so that we don't lose the acquire semantics.  */
-
   if (is_mm_acquire (memmodel_from_int (INTVAL (mod_f)))
       && is_mm_release (memmodel_from_int (INTVAL (mod_s))))
     mod_s = GEN_INT (MEMMODEL_ACQ_REL);
 
-  switch (mode)
+  r_mode = mode;
+  if (mode == QImode || mode == HImode)
     {
-    case E_QImode:
-    case E_HImode:
-      /* For short modes, we're going to perform the comparison in SImode,
-	 so do the zero-extension now.  */
-      cmp_mode = SImode;
-      rval = gen_reg_rtx (SImode);
-      oldval = convert_modes (SImode, mode, oldval, true);
-      /* Fall through.  */
-
-    case E_SImode:
-    case E_DImode:
-      /* Force the value into a register if needed.  */
-      if (TARGET_LSE || !aarch64_plus_operand (oldval, mode))
-	oldval = force_reg (cmp_mode, oldval);
-      break;
-
-    default:
-      gcc_unreachable ();
+      r_mode = SImode;
+      rval = gen_reg_rtx (r_mode);
     }
 
   if (TARGET_LSE)
     {
+      /* Oldval always requires a register.  We also must not clobber
+	 oldval when writing to rval, so that we can compare afterward.  */
+      oldval = force_reg (mode, oldval);
       if (reg_overlap_mentioned_p (rval, oldval))
-	rval = gen_reg_rtx (cmp_mode);
+	rval = gen_reg_rtx (r_mode);
+
       emit_insn (gen_aarch64_compare_and_swap_lse (mode, rval, mem, oldval,
                                                    newval, mod_s));
-      aarch64_gen_compare_reg (EQ, rval, oldval);
+      cc_reg = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
     }
   else
-    emit_insn (gen_aarch64_compare_and_swap (mode, rval, mem, oldval, newval,
-                                             is_weak, mod_s, mod_f));
+    {
+      /* The oldval predicate varies by mode.  Test it and force to reg.  */
+      insn_code code = code_for_aarch64_compare_and_swap (mode);
+      if (!insn_data[code].operand[2].predicate (oldval, mode))
+	oldval = force_reg (mode, oldval);
 
-  if (mode == QImode || mode == HImode)
+      emit_insn (GEN_FCN (code) (rval, mem, oldval, newval,
+                                 is_weak, mod_s, mod_f));
+      cc_reg = gen_rtx_REG (CCmode, CC_REGNUM);
+    }
+
+  if (r_mode != mode)
     rval = gen_lowpart (mode, rval);
   emit_move_insn (operands[1], rval);
 
-  x = gen_rtx_REG (CCmode, CC_REGNUM);
-  x = gen_rtx_EQ (SImode, x, const0_rtx);
+  x = gen_rtx_EQ (SImode, cc_reg, const0_rtx);
   emit_insn (gen_rtx_SET (bval, x));
 }
 
@@ -14314,10 +14335,10 @@ aarch64_split_compare_and_swap (rtx operands[])
     }
   else
     {
-      cond = aarch64_gen_compare_reg (NE, rval, oldval);
+      cond = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
       x = gen_rtx_NE (VOIDmode, cond, const0_rtx);
       x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
-                                gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
+				gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
       aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
     }
 
diff --git a/gcc/config/aarch64/atomics.md b/gcc/config/aarch64/atomics.md
index 9f00dd3c68e..c00a18675b4 100644
--- a/gcc/config/aarch64/atomics.md
+++ b/gcc/config/aarch64/atomics.md
@@ -24,8 +24,8 @@
    [(match_operand:SI 0 "register_operand" "")			;; bool out
     (match_operand:ALLI 1 "register_operand" "")		;; val out
     (match_operand:ALLI 2 "aarch64_sync_memory_operand" "")	;; memory
-    (match_operand:ALLI 3 "general_operand" "")		;; expected
-    (match_operand:ALLI 4 "aarch64_reg_or_zero" "")		;; desired
+    (match_operand:ALLI 3 "nonmemory_operand" "")		;; expected
+    (match_operand:ALLI 4 "aarch64_reg_or_zero" "")		;; desired
     (match_operand:SI 5 "const_int_operand")			;; is_weak
     (match_operand:SI 6 "const_int_operand")			;; mod_s
     (match_operand:SI 7 "const_int_operand")]			;; mod_f
@@ -36,19 +36,22 @@
   }
 )
 
+(define_mode_attr cas_short_expected_pred
+  [(QI "aarch64_reg_or_imm") (HI "aarch64_plushi_operand")])
+
 (define_insn_and_split "@aarch64_compare_and_swap<mode>"
   [(set (reg:CC CC_REGNUM)					;; bool out
     (unspec_volatile:CC [(const_int 0)] UNSPECV_ATOMIC_CMPSW))
-   (set (match_operand:SI 0 "register_operand" "=&r")	   ;; val out
+   (set (match_operand:SI 0 "register_operand" "=&r")		;; val out
    (zero_extend:SI
      (match_operand:SHORT 1 "aarch64_sync_memory_operand" "+Q"))) ;; memory
    (set (match_dup 1)
     (unspec_volatile:SHORT
-      [(match_operand:SI 2 "aarch64_plus_operand" "rI")	;; expected
+      [(match_operand:SHORT 2 "<cas_short_expected_pred>" "rn")	;; expected
       (match_operand:SHORT 3 "aarch64_reg_or_zero" "rZ")	;; desired
-      (match_operand:SI 4 "const_int_operand")		;; is_weak
-      (match_operand:SI 5 "const_int_operand")		;; mod_s
-      (match_operand:SI 6 "const_int_operand")]		;; mod_f
+      (match_operand:SI 4 "const_int_operand")			;; is_weak
+      (match_operand:SI 5 "const_int_operand")			;; mod_s
+      (match_operand:SI 6 "const_int_operand")]			;; mod_f
       UNSPECV_ATOMIC_CMPSW))
    (clobber (match_scratch:SI 7 "=&r"))]
   ""
@@ -68,7 +71,7 @@
       (match_operand:GPI 1 "aarch64_sync_memory_operand" "+Q"))   ;; memory
    (set (match_dup 1)
     (unspec_volatile:GPI
-      [(match_operand:GPI 2 "aarch64_plus_operand" "rI")	;; expect
+      [(match_operand:GPI 2 "aarch64_plus_operand" "rn")	;; expect
       (match_operand:GPI 3 "aarch64_reg_or_zero" "rZ")		;; desired
       (match_operand:SI 4 "const_int_operand")			;; is_weak
       (match_operand:SI 5 "const_int_operand")			;; mod_s
@@ -91,7 +94,7 @@
       (match_operand:SHORT 1 "aarch64_sync_memory_operand" "+Q"))) ;; memory
    (set (match_dup 1)
     (unspec_volatile:SHORT
-      [(match_operand:SI 2 "register_operand" "0")		;; expected
+      [(match_operand:SHORT 2 "register_operand" "0")		;; expected
       (match_operand:SHORT 3 "aarch64_reg_or_zero" "rZ")	;; desired
       (match_operand:SI 4 "const_int_operand")]			;; mod_s
       UNSPECV_ATOMIC_CMPSW))]
diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
index 5b08b03c586..4c75eff3e5a 100644
--- a/gcc/config/aarch64/predicates.md
+++ b/gcc/config/aarch64/predicates.md
@@ -114,6 +114,18 @@
   (ior (match_operand 0 "register_operand")
        (match_operand 0 "aarch64_plus_immediate")))
 
+(define_predicate "aarch64_plushi_immediate"
+  (match_code "const_int")
+{
+  HOST_WIDE_INT val = INTVAL (op);
+  /* The HImode value must be zero-extendable to an SImode plus_operand.  */
+  return ((val & 0xfff) == val || sext_hwi (val & 0xf000, 16) == val);
+})
+
+(define_predicate "aarch64_plushi_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_operand 0 "aarch64_plushi_immediate")))
+
 (define_predicate "aarch64_pluslong_immediate"
   (and (match_code "const_int")
        (match_test "(INTVAL (op) < 0xffffff && INTVAL (op) > -0xffffff)")))
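
For illustration only (not part of the patch): the new
aarch64_plushi_immediate predicate accepts exactly those HImode constants
that are still valid plus-immediates after zero-extension to SImode, i.e.
values that fit in the low 12 bits, or values whose low 12 bits are all
zero (the "imm12, LSL #12" form).  Since HImode CONST_INTs are stored
sign-extended, 0xf000 arrives as -4096 and is caught by the sext_hwi test.
A self-contained sketch of the same check, using hypothetical helper names:

    /* Standalone sketch of the aarch64_plushi_immediate test; the names
       sext16 and plushi_immediate_p are hypothetical.  */
    #include <stdint.h>
    #include <stdbool.h>

    /* Stand-in for GCC's sext_hwi (x, 16): sign-extend from bit 15.  */
    static int64_t sext16 (int64_t x) { return (int16_t) x; }

    static bool
    plushi_immediate_p (int64_t val)
    {
      /* Accept a plain 12-bit immediate, or one whose low 12 bits are
         zero (the shifted form), as seen after sign-extension of the
         HImode CONST_INT.  */
      return (val & 0xfff) == val || sext16 (val & 0xf000) == val;
    }

    int
    main (void)
    {
      /* 0x0abc fits in 12 bits; 0x3000 has its low 12 bits clear;
         0xf000 is stored sign-extended as -4096 and is also accepted;
         0x1234 is neither form, so the expander forces it into a
         register instead.  */
      return !(plushi_immediate_p (0x0abc)
               && plushi_immediate_p (0x3000)
               && plushi_immediate_p (-4096)
               && !plushi_immediate_p (0x1234));
    }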