From patchwork Tue Aug 22 10:38:27 2023
X-Patchwork-Submitter: Szabolcs Nagy
X-Patchwork-Id: 1824087
From: Szabolcs Nagy
Subject: [PATCH 03/11] aarch64: Use br instead of ret for eh_return
Date: Tue, 22 Aug 2023 11:38:27 +0100
Message-ID: <913cd5eb33e01ad279915b4a1f0ce4bd7afd5ad7.1692699125.git.szabolcs.nagy@arm.com>

The expected way to handle eh_return is to pass the stack adjustment
offset and the landing pad address via EH_RETURN_STACKADJ_RTX and
EH_RETURN_HANDLER_RTX to the epilogue that is shared between the normal
return paths and the eh_return paths.  EH_RETURN_HANDLER_RTX is the
stack slot of the return address, which is overwritten with the landing
pad in the eh_return case, and EH_RETURN_STACKADJ_RTX is a register
that is added to sp right before the return and is set to 0 in the
normal return case.

The issue with this design is that eh_return and normal return may
require different return sequences, but there is no way to distinguish
the two cases in the epilogue (the stack adjustment may be 0 in the
eh_return case too).

The reason eh_return and normal return require different return
sequences is that control-flow integrity hardening may need to treat
eh_return as a forward-edge transfer (it does not return to the
previous stack frame) and normal return as a backward-edge one.  On
AArch64 the forward edge is protected by BTI and requires a br
instruction, while the backward edge is protected by PAUTH or GCS and
requires a ret (or authenticated ret) instruction.

This patch resolves the issue by using the EH_RETURN_STACKADJ_RTX
register only as a flag that is set to 1 on the eh_return paths (it is
0 on normal return paths) and introduces AARCH64_EH_RETURN_STACKADJ_RTX
and AARCH64_EH_RETURN_HANDLER_RTX to pass the actual stack adjustment
and landing pad address to the epilogue in the eh_return case.  The
epilogue can then use the right return sequence based on the
EH_RETURN_STACKADJ_RTX flag.
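For context, the only expected user of this mechanism is the unwinder,
which reaches the epilogue through __builtin_eh_return.  A rough sketch
of such a caller (illustrative only, hypothetical names, not code from
this patch):

    /* Hypothetical libgcc-style caller of __builtin_eh_return.  The
       middle end loads STACK_ADJUST into EH_RETURN_STACKADJ_RTX (x4)
       and expands the eh_return pattern with LANDING_PAD; with this
       patch aarch64_eh_return then copies the adjustment to x5, the
       handler to x6 and sets the x4 flag to 1.  */
    void
    jump_to_landing_pad (long stack_adjust, void *landing_pad)
    {
      /* Must be the last statement executed: the actual transfer
         happens when this function's epilogue runs.  */
      __builtin_eh_return (stack_adjust, landing_pad);
    }

In the generated epilogue the x4 flag then selects the return sequence:
if it is zero the normal ret (or authenticated ret) path is taken,
otherwise sp is adjusted by x5 and control transfers to x6 with br.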
The handler could still be passed the old way, by clobbering the return
address, but since the eh_return case can now be distinguished, the
handler can live in a different register than x30 and no stack frame is
needed for eh_return.

Code generation for functions with eh_return is not ideal, since x5 and
x6 are assumed to be used by the epilogue even on the normal return
path, not just for eh_return.  But only the unwinder is expected to use
eh_return, so this is fine.

This patch fixes a return-to-anywhere gadget in the unwinder with the
existing standard branch protection, and it also makes EH return
compatible with the Guarded Control Stack (GCS) extension.

gcc/ChangeLog:

	* config/aarch64/aarch64-protos.h (aarch64_eh_return_handler_rtx):
	Remove.
	(aarch64_eh_return): New.
	* config/aarch64/aarch64.cc (aarch64_return_address_signing_enabled):
	Sign return address even in functions with eh_return.
	(aarch64_epilogue_uses): Mark two registers as used.
	(aarch64_expand_epilogue): Conditionally return with br or ret.
	(aarch64_eh_return_handler_rtx): Remove.
	(aarch64_eh_return): New.
	* config/aarch64/aarch64.h (EH_RETURN_HANDLER_RTX): Remove.
	(AARCH64_EH_RETURN_STACKADJ_REGNUM): Define.
	(AARCH64_EH_RETURN_STACKADJ_RTX): Define.
	(AARCH64_EH_RETURN_HANDLER_REGNUM): Define.
	(AARCH64_EH_RETURN_HANDLER_RTX): Define.
	* config/aarch64/aarch64.md (eh_return): New.
---
 gcc/config/aarch64/aarch64-protos.h |   2 +-
 gcc/config/aarch64/aarch64.cc       | 106 +++++++++++++++-------------
 gcc/config/aarch64/aarch64.h        |  11 ++-
 gcc/config/aarch64/aarch64.md       |   8 +++
 4 files changed, 73 insertions(+), 54 deletions(-)

diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
index 70303d6fd95..5d1834162a4 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -855,7 +855,7 @@ machine_mode aarch64_hard_regno_caller_save_mode (unsigned, unsigned,
 						  machine_mode);
 int aarch64_uxt_size (int, HOST_WIDE_INT);
 int aarch64_vec_fpconst_pow_of_2 (rtx);
-rtx aarch64_eh_return_handler_rtx (void);
+void aarch64_eh_return (rtx);
 rtx aarch64_mask_from_zextract_ops (rtx, rtx);
 const char *aarch64_output_move_struct (rtx *operands);
 rtx aarch64_return_addr_rtx (void);
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index eba5d4a7e04..36cd172d182 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -8972,17 +8972,6 @@ aarch64_return_address_signing_enabled (void)
   /* This function should only be called after frame laid out.  */
   gcc_assert (cfun->machine->frame.laid_out);
 
-  /* Turn return address signing off in any function that uses
-     __builtin_eh_return.  The address passed to __builtin_eh_return
-     is not signed so either it has to be signed (with original sp)
-     or the code path that uses it has to avoid authenticating it.
-     Currently eh return introduces a return to anywhere gadget, no
-     matter what we do here since it uses ret with user provided
-     address.  An ideal fix for that is to use indirect branch which
-     can be protected with BTI j (to some extent).  */
-  if (crtl->calls_eh_return)
-    return false;
-
   /* If signing scope is AARCH_FUNCTION_NON_LEAF, we only sign a leaf
      function if its LR is pushed onto stack.  */
   return (aarch_ra_sign_scope == AARCH_FUNCTION_ALL
@@ -9932,9 +9921,8 @@ aarch64_allocate_and_probe_stack_space (rtx temp1, rtx temp2,
    Note that in the case of sibcalls, the values "used by the epilogue" are
    considered live at the start of the called function.
 
-   For SIMD functions we need to return 1 for FP registers that are saved and
-   restored by a function but are not zero in call_used_regs.  If we do not do
-   this optimizations may remove the restore of the register.  */
+   For EH return we need to keep two registers alive for stack adjustment
+   and return address.  */
 
 int
 aarch64_epilogue_uses (int regno)
@@ -9944,6 +9932,13 @@ aarch64_epilogue_uses (int regno)
       if (regno == LR_REGNUM)
 	return 1;
     }
+
+  if (!epilogue_completed && crtl->calls_eh_return)
+    {
+      if (regno == AARCH64_EH_RETURN_STACKADJ_REGNUM
+	  || regno == AARCH64_EH_RETURN_HANDLER_REGNUM)
+	return 1;
+    }
   return 0;
 }
 
@@ -10342,6 +10337,30 @@ aarch64_expand_epilogue (bool for_sibcall)
       RTX_FRAME_RELATED_P (insn) = 1;
     }
 
+  /* Stack adjustment for exception handler.  */
+  if (crtl->calls_eh_return && !for_sibcall)
+    {
+      /* If the EH_RETURN_STACKADJ_RTX flag is set then we need
+	 to unwind the stack and jump to the handler, otherwise
+	 skip this eh_return logic and continue with normal
+	 return after the label.  We have already reset the CFA
+	 to be SP; letting the CFA move during this adjustment
+	 is just as correct as retaining the CFA from the body
+	 of the function.  Therefore, do nothing special.  */
+      rtx label = gen_label_rtx ();
+      rtx x = gen_rtx_EQ (VOIDmode, EH_RETURN_STACKADJ_RTX, const0_rtx);
+      x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
+				gen_rtx_LABEL_REF (Pmode, label), pc_rtx);
+      rtx jump = emit_jump_insn (gen_rtx_SET (pc_rtx, x));
+      JUMP_LABEL (jump) = label;
+      LABEL_NUSES (label)++;
+      emit_insn (gen_add2_insn (stack_pointer_rtx,
+				AARCH64_EH_RETURN_STACKADJ_RTX));
+      emit_jump_insn (gen_indirect_jump (AARCH64_EH_RETURN_HANDLER_RTX));
+      emit_barrier ();
+      emit_label (label);
+    }
+
   /* We prefer to emit the combined return/authenticate instruction RETAA,
      however there are three cases in which we must instead emit an explicit
      authentication instruction.
@@ -10371,56 +10390,41 @@ aarch64_expand_epilogue (bool for_sibcall)
       RTX_FRAME_RELATED_P (insn) = 1;
     }
 
-  /* Stack adjustment for exception handler.  */
-  if (crtl->calls_eh_return && !for_sibcall)
-    {
-      /* We need to unwind the stack by the offset computed by
-	 EH_RETURN_STACKADJ_RTX.  We have already reset the CFA
-	 to be SP; letting the CFA move during this adjustment
-	 is just as correct as retaining the CFA from the body
-	 of the function.  Therefore, do nothing special.  */
-      emit_insn (gen_add2_insn (stack_pointer_rtx, EH_RETURN_STACKADJ_RTX));
-    }
-
   emit_use (gen_rtx_REG (DImode, LR_REGNUM));
   if (!for_sibcall)
     emit_jump_insn (ret_rtx);
 }
 
-/* Implement EH_RETURN_HANDLER_RTX.  EH returns need to either return
-   normally or return to a previous frame after unwinding.
+/* Implement the eh_return instruction pattern.  Functions with EH returns
+   either return normally or return to a previous frame after unwinding.
 
-   An EH return uses a single shared return sequence.  The epilogue is
+   The two cases use a single shared return sequence.  The epilogue is
    exactly like a normal epilogue except that it has an extra input
    register (EH_RETURN_STACKADJ_RTX) which contains the stack adjustment
    that must be applied after the frame has been destroyed.  An extra label
    is inserted before the epilogue which initializes this register to zero,
    and this is the entry point for a normal return.
 
-   An actual EH return updates the return address, initializes the stack
-   adjustment and jumps directly into the epilogue (bypassing the zeroing
-   of the adjustment).  Since the return address is typically saved on the
-   stack when a function makes a call, the saved LR must be updated outside
-   the epilogue.
-
-   This poses problems as the store is generated well before the epilogue,
-   so the offset of LR is not known yet.  Also optimizations will remove the
-   store as it appears dead, even after the epilogue is generated (as the
-   base or offset for loading LR is different in many cases).
-
-   To avoid these problems this implementation forces the frame pointer
-   in eh_return functions so that the location of LR is fixed and known early.
-   It also marks the store volatile, so no optimization is permitted to
-   remove the store.  */
-rtx
-aarch64_eh_return_handler_rtx (void)
-{
-  rtx tmp = gen_frame_mem (Pmode,
-    plus_constant (Pmode, hard_frame_pointer_rtx, UNITS_PER_WORD));
+   An actual EH return initializes the stack adjustment then invokes this
+   target hook (which supposed to overwrite the return address) and then
+   jumps directly into the epilogue (bypassing the zeroing of the adjustment).
+
+   We depart from the intended EH return logic by using two additional
+   registers to pass the handler and stack adjustment to the epilogue
 
-  /* Mark the store volatile, so no optimization is permitted to remove it.  */
-  MEM_VOLATILE_P (tmp) = true;
-  return tmp;
+     AARCH64_EH_RETURN_HANDLER_RTX
+     AARCH64_EH_RETURN_STACKADJ_RTX
+
+   and set EH_RETURN_STACKADJ_RTX to 1 in the EH return path so it is a
+   flag that the epilogue can use to distinguish normal and EH returns.
+   This allows different return instructions in the two cases.  The return
+   address is not modified for EH returns.  */
+void
+aarch64_eh_return (rtx handler)
+{
+  emit_move_insn (AARCH64_EH_RETURN_HANDLER_RTX, handler);
+  emit_move_insn (AARCH64_EH_RETURN_STACKADJ_RTX, EH_RETURN_STACKADJ_RTX);
+  emit_move_insn (EH_RETURN_STACKADJ_RTX, const1_rtx);
 }
 
 /* Output code to add DELTA to the first argument, and then jump
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index c783cb96c48..fa68ef0057a 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -583,9 +583,16 @@ enum class aarch64_feature : unsigned char {
 /* Output assembly strings after .cfi_startproc is emitted.  */
 #define ASM_POST_CFI_STARTPROC  aarch64_post_cfi_startproc
 
-/* For EH returns X4 contains the stack adjustment.  */
+/* For EH returns X4 is a flag that is set in the EH return
+   code paths and then X5 and X6 contain the stack adjustment
+   and return address respectively.  */
 #define EH_RETURN_STACKADJ_RTX	gen_rtx_REG (Pmode, R4_REGNUM)
-#define EH_RETURN_HANDLER_RTX  aarch64_eh_return_handler_rtx ()
+#define AARCH64_EH_RETURN_STACKADJ_REGNUM	R5_REGNUM
+#define AARCH64_EH_RETURN_STACKADJ_RTX \
+  gen_rtx_REG (Pmode, AARCH64_EH_RETURN_STACKADJ_REGNUM)
+#define AARCH64_EH_RETURN_HANDLER_REGNUM	R6_REGNUM
+#define AARCH64_EH_RETURN_HANDLER_RTX \
+  gen_rtx_REG (Pmode, AARCH64_EH_RETURN_HANDLER_REGNUM)
 
 #undef TARGET_COMPUTE_FRAME_LAYOUT
 #define TARGET_COMPUTE_FRAME_LAYOUT aarch64_layout_frame
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 01cf989641f..0a3474776f0 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -877,6 +877,14 @@ (define_expand "sibcall_epilogue"
   "
 )
 
+(define_expand "eh_return"
+  [(use (match_operand 0 "general_operand"))]
+  ""
+{
+  aarch64_eh_return (operands[0]);
+  DONE;
+})
+
 (define_insn "*do_return"
   [(return)]
   ""