From patchwork Thu Nov  1 21:46:45 2018
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 992090
From: Richard Henderson <richard.henderson@linaro.org>
To: gcc-patches@gcc.gnu.org
Cc: ramana.radhakrishnan@arm.com, agraf@suse.de, marcus.shawcroft@arm.com,
	james.greenhalgh@arm.com, Richard Henderson
Subject: [PATCH, AArch64, v3 3/6] aarch64: Tidy aarch64_split_compare_and_swap
Date: Thu,  1 Nov 2018 21:46:45 +0000
Message-Id: <20181101214648.29432-4-richard.henderson@linaro.org>
In-Reply-To: <20181101214648.29432-1-richard.henderson@linaro.org>
References: <20181101214648.29432-1-richard.henderson@linaro.org>

From: Richard Henderson

With aarch64_track_speculation, we had extra code to do exactly what the
!strong_zero_p path already did.  The rest is reducing code duplication.

	* config/aarch64/aarch64.c (aarch64_split_compare_and_swap): Disable
	strong_zero_p for aarch64_track_speculation; unify some code paths;
	use aarch64_gen_compare_reg instead of open-coding.
---
 gcc/config/aarch64/aarch64.c | 50 ++++++++++--------------------------
 1 file changed, 14 insertions(+), 36 deletions(-)

diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 942f2037235..b29f437aeaf 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -14767,13 +14767,11 @@ aarch64_emit_post_barrier (enum memmodel model)
 void
 aarch64_split_compare_and_swap (rtx operands[])
 {
-  rtx rval, mem, oldval, newval, scratch;
+  rtx rval, mem, oldval, newval, scratch, x, model_rtx;
   machine_mode mode;
   bool is_weak;
   rtx_code_label *label1, *label2;
-  rtx x, cond;
   enum memmodel model;
-  rtx model_rtx;
 
   rval = operands[0];
   mem = operands[1];
@@ -14794,7 +14792,8 @@ aarch64_split_compare_and_swap (rtx operands[])
 	CBNZ	scratch, .label1
     .label2:
 	CMP	rval, 0.  */
-  bool strong_zero_p = !is_weak && oldval == const0_rtx && mode != TImode;
+  bool strong_zero_p = (!is_weak && !aarch64_track_speculation
+			&& oldval == const0_rtx && mode != TImode);
 
   label1 = NULL;
   if (!is_weak)
@@ -14807,35 +14806,20 @@ aarch64_split_compare_and_swap (rtx operands[])
   /* The initial load can be relaxed for a __sync operation since a final
      barrier will be emitted to stop code hoisting.  */
   if (is_mm_sync (model))
-    aarch64_emit_load_exclusive (mode, rval, mem,
-				 GEN_INT (MEMMODEL_RELAXED));
+    aarch64_emit_load_exclusive (mode, rval, mem, GEN_INT (MEMMODEL_RELAXED));
   else
     aarch64_emit_load_exclusive (mode, rval, mem, model_rtx);
 
   if (strong_zero_p)
-    {
-      if (aarch64_track_speculation)
-	{
-	  /* Emit an explicit compare instruction, so that we can correctly
-	     track the condition codes.  */
-	  rtx cc_reg = aarch64_gen_compare_reg (NE, rval, const0_rtx);
-	  x = gen_rtx_NE (GET_MODE (cc_reg), cc_reg, const0_rtx);
-	}
-      else
-	x = gen_rtx_NE (VOIDmode, rval, const0_rtx);
-
-      x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
-				gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
-      aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
-    }
+    x = gen_rtx_NE (VOIDmode, rval, const0_rtx);
   else
     {
-      cond = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
-      x = gen_rtx_NE (VOIDmode, cond, const0_rtx);
-      x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
-				gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
-      aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
+      rtx cc_reg = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
+      x = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx);
     }
+  x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
+			    gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
+  aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
 
   aarch64_emit_store_exclusive (mode, scratch, mem, newval, model_rtx);
 
@@ -14856,22 +14840,16 @@ aarch64_split_compare_and_swap (rtx operands[])
       aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
     }
   else
-    {
-      cond = gen_rtx_REG (CCmode, CC_REGNUM);
-      x = gen_rtx_COMPARE (CCmode, scratch, const0_rtx);
-      emit_insn (gen_rtx_SET (cond, x));
-    }
+    aarch64_gen_compare_reg (NE, scratch, const0_rtx);
 
   emit_label (label2);
+
   /* If we used a CBNZ in the exchange loop emit an explicit compare
      with RVAL to set the condition flags.  If this is not used it
      will be removed by later passes.  */
   if (strong_zero_p)
-    {
-      cond = gen_rtx_REG (CCmode, CC_REGNUM);
-      x = gen_rtx_COMPARE (CCmode, rval, const0_rtx);
-      emit_insn (gen_rtx_SET (cond, x));
-    }
+    aarch64_gen_compare_reg (NE, rval, const0_rtx);
+
   /* Emit any final barrier needed for a __sync operation.  */
   if (is_mm_sync (model))
     aarch64_emit_post_barrier (model);
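
For context, a minimal sketch of the case this patch touches (not part of
the patch itself): a strong compare-and-swap whose expected value is the
constant zero takes the strong_zero_p path.  The function name try_lock
and the exact registers in the commented assembly are illustrative
assumptions, not output from this patch; the instruction shape follows
the block comment above aarch64_split_compare_and_swap.

/* Illustrative only.  With a strong CAS and oldval == 0, the splitter
   can test the loaded value directly with CBNZ and recreate the flags
   afterward, roughly:

	.L1:	ldaxr	x2, [x0]	// rval = *lock
		cbnz	x2, .L2		// nonzero: compare failed
		stlxr	w3, x1, [x0]	// try *lock = id
		cbnz	w3, .L1		// store-exclusive failed: retry
	.L2:	cmp	x2, #0		// set flags from rval for the result

   With -mtrack-speculation the patch now forces the generic path
   instead, so the condition codes always come from a real compare that
   the speculation tracking can see.  */

int
try_lock (long *lock, long id)
{
  long expected = 0;
  return __atomic_compare_exchange_n (lock, &expected, id, /*weak=*/0,
				      __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
}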