From patchwork Wed Jul 20 16:50:31 2022
X-Patchwork-Submitter: Wilco Dijkstra
X-Patchwork-Id: 1658718
To: 'GNU C Library'
Subject: [PATCH] Remove atomic_forced_read
Date: Wed, 20 Jul 2022 16:50:31 +0000
From: Wilco Dijkstra via Libc-alpha
Reply-To: Wilco Dijkstra
List-Id: Libc-alpha mailing list

Remove the odd atomic_forced_read, which is neither atomic nor forced.
Some uses are completely redundant, so simply remove them.  In other
cases the intended use is to force a memory ordering, so use an acquire
load for those.  In yet other cases the purpose is unclear: for example
__nscd_cache_search appears to allow concurrent accesses to the cache
while it is being garbage collected by another thread!  Use relaxed
atomic loads here so the compiler cannot spill and then accidentally
reload memory that is being changed - however, given there are multiple
accesses without any synchronization, it is unclear how this could ever
work reliably...
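For reference, the difference between the old trick and the new loads can be
seen in a small stand-alone example (not part of the patch; it uses the
GCC/Clang __atomic builtins that glibc's atomic_load_relaxed and
atomic_load_acquire typically map to, and invented variable names):

/* Stand-alone illustration, not part of the patch.  The names
   shared_flag/shared_data are invented for the example.  */
#include <stdio.h>

static int shared_flag;
static int shared_data;

/* The macro being removed: an empty asm forces X into a register so the
   compiler cannot re-read it later, but it gives no atomicity and no
   ordering against other memory accesses.  */
#define forced_read(x) \
  ({ __typeof (x) __x; __asm ("" : "=r" (__x) : "0" (x)); __x; })

int
main (void)
{
  shared_data = 42;
  shared_flag = 1;

  /* Old idiom: one register copy, no ordering guarantee.  */
  int f1 = forced_read (shared_flag);

  /* Relaxed atomic load: still unordered, but the compiler may not spill
     and silently reload the value from memory.  */
  int f2 = __atomic_load_n (&shared_flag, __ATOMIC_RELAXED);

  /* Acquire atomic load: additionally orders this load before all later
     memory accesses, e.g. the read of shared_data below.  */
  int f3 = __atomic_load_n (&shared_flag, __ATOMIC_ACQUIRE);
  int d = shared_data;

  printf ("%d %d %d %d\n", f1, f2, f3, d);
  return 0;
}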
Passes regress on AArch64, OK for commit?

diff --git a/elf/dl-lookup.c b/elf/dl-lookup.c
index 8cb32321da33ea4f41e4a6cee038c2a6697ad817..02c63a7062b2be0f37a412160fdb2b3468cc70cf 100644
--- a/elf/dl-lookup.c
+++ b/elf/dl-lookup.c
@@ -346,12 +346,12 @@ do_lookup_x (const char *undef_name, unsigned int new_hash,
 	     const struct r_found_version *const version, int flags,
 	     struct link_map *skip, int type_class, struct link_map *undef_map)
 {
-  size_t n = scope->r_nlist;
-  /* Make sure we read the value before proceeding.  Otherwise we
+  /* Make sure we read r_nlist before r_list, or otherwise we
      might use r_list pointing to the initial scope and r_nlist being
      the value after a resize.  That is the only path in dl-open.c not
-     protected by GSCOPE.  A read barrier here might be to expensive.  */
-  __asm volatile ("" : "+r" (n), "+m" (scope->r_list));
+     protected by GSCOPE.  This works if all updates also use a store-
+     release or release barrier.  */
+  size_t n = atomic_load_acquire (&scope->r_nlist);
   struct link_map **list = scope->r_list;
 
   do
@@ -528,15 +528,13 @@ add_dependency (struct link_map *undef_map, struct link_map *map, int flags)
   if (is_nodelete (map, flags))
     return 0;
 
-  struct link_map_reldeps *l_reldeps
-    = atomic_forced_read (undef_map->l_reldeps);
-
-  /* Make sure l_reldeps is read before l_initfini.  */
-  atomic_read_barrier ();
+  struct link_map_reldeps *l_reldeps
+    = atomic_load_acquire (&undef_map->l_reldeps);
 
   /* Determine whether UNDEF_MAP already has a reference to MAP.  First
      look in the normal dependencies.  */
-  struct link_map **l_initfini = atomic_forced_read (undef_map->l_initfini);
+  struct link_map **l_initfini = undef_map->l_initfini;
   if (l_initfini != NULL)
     {
       for (i = 0; l_initfini[i] != NULL; ++i)
@@ -570,7 +568,7 @@ add_dependency (struct link_map *undef_map, struct link_map *map, int flags)
 	 it can e.g. point to unallocated memory.  So avoid the optimizer
 	 treating the above read from MAP->l_serial as ensurance it can
 	 safely dereference it.  */
-      map = atomic_forced_read (map);
+      __asm ("" : "=r" (map) : "0" (map));
 
       /* From this point on it is unsafe to dereference MAP, until it
 	 has been found in one of the lists.  */
diff --git a/include/atomic.h b/include/atomic.h
index 53bbf0423344ceda6cf98653ffa90e8d4f5d81aa..8eb56362ba18eb4836070930d5f2e769fb6a0a1e 100644
--- a/include/atomic.h
+++ b/include/atomic.h
@@ -119,11 +119,6 @@
 #endif
 
-#ifndef atomic_forced_read
-# define atomic_forced_read(x) \
-  ({ __typeof (x) __x; __asm ("" : "=r" (__x) : "0" (x)); __x; })
-#endif
-
 /* This is equal to 1 iff the architecture supports 64b atomic
    operations.  */
 #ifndef __HAVE_64B_ATOMICS
 #error Unable to determine if 64-bit atomics are present.
diff --git a/malloc/malloc-debug.c b/malloc/malloc-debug.c
index 43604ac2641e2b80eb0e4f20747af895ab2e6d55..4e56ff71f0fd1895c770f58667db93c0372a5aee 100644
--- a/malloc/malloc-debug.c
+++ b/malloc/malloc-debug.c
@@ -168,7 +168,7 @@ static mchunkptr dumped_main_arena_end;   /* Exclusive.  */
 static void *
 __debug_malloc (size_t bytes)
 {
-  void *(*hook) (size_t, const void *) = atomic_forced_read (__malloc_hook);
+  void *(*hook) (size_t, const void *) = __malloc_hook;
   if (__builtin_expect (hook != NULL, 0))
     return (*hook)(bytes, RETURN_ADDRESS (0));
 
@@ -192,7 +192,7 @@ strong_alias (__debug_malloc, malloc)
 static void
 __debug_free (void *mem)
 {
-  void (*hook) (void *, const void *) = atomic_forced_read (__free_hook);
+  void (*hook) (void *, const void *) = __free_hook;
   if (__builtin_expect (hook != NULL, 0))
     {
       (*hook)(mem, RETURN_ADDRESS (0));
@@ -216,8 +216,7 @@ strong_alias (__debug_free, free)
 static void *
 __debug_realloc (void *oldmem, size_t bytes)
 {
-  void *(*hook) (void *, size_t, const void *) =
-    atomic_forced_read (__realloc_hook);
+  void *(*hook) (void *, size_t, const void *) = __realloc_hook;
   if (__builtin_expect (hook != NULL, 0))
     return (*hook)(oldmem, bytes, RETURN_ADDRESS (0));
 
@@ -270,8 +269,7 @@ strong_alias (__debug_realloc, realloc)
 static void *
 _debug_mid_memalign (size_t alignment, size_t bytes, const void *address)
 {
-  void *(*hook) (size_t, size_t, const void *) =
-    atomic_forced_read (__memalign_hook);
+  void *(*hook) (size_t, size_t, const void *) = __memalign_hook;
   if (__builtin_expect (hook != NULL, 0))
     return (*hook)(alignment, bytes, address);
 
@@ -363,7 +361,7 @@ __debug_calloc (size_t nmemb, size_t size)
       return NULL;
     }
 
-  void *(*hook) (size_t, const void *) = atomic_forced_read (__malloc_hook);
+  void *(*hook) (size_t, const void *) = __malloc_hook;
   if (__builtin_expect (hook != NULL, 0))
     {
       void *mem = (*hook)(bytes, RETURN_ADDRESS (0));
diff --git a/nptl/pthread_sigqueue.c b/nptl/pthread_sigqueue.c
index 48dc3ca4ee5673b7b4b2543b823d6e9354ae9849..f4149fb1779eacea0ead107f7bfce32b22114f3b 100644
--- a/nptl/pthread_sigqueue.c
+++ b/nptl/pthread_sigqueue.c
@@ -33,7 +33,7 @@ __pthread_sigqueue (pthread_t threadid, int signo, const union sigval value)
   /* Force load of pd->tid into local variable or register.  Otherwise
      if a thread exits between ESRCH test and tgkill, we might return
      EINVAL, because pd->tid would be cleared by the kernel.  */
-  pid_t tid = atomic_forced_read (pd->tid);
+  pid_t tid = atomic_load_relaxed (&pd->tid);
   if (__glibc_unlikely (tid <= 0))
     /* Not a valid thread handle.  */
     return ESRCH;
diff --git a/nscd/nscd_helper.c b/nscd/nscd_helper.c
index fc41bfdb6eebb880d6132ea5cf409ca657570f82..7a9a49955691e15079a94b78a12f9efed381ecb5 100644
--- a/nscd/nscd_helper.c
+++ b/nscd/nscd_helper.c
@@ -454,7 +454,6 @@ __nscd_cache_search (request_type type, const char *key, size_t keylen,
   size_t datasize = mapped->datasize;
 
   ref_t trail = mapped->head->array[hash];
-  trail = atomic_forced_read (trail);
   ref_t work = trail;
   size_t loop_cnt = datasize / (MINIMUM_HASHENTRY_SIZE
 				+ offsetof (struct datahead, data) / 2);
@@ -465,32 +464,29 @@ __nscd_cache_search (request_type type, const char *key, size_t keylen,
       struct hashentry *here = (struct hashentry *) (mapped->data + work);
       ref_t here_key, here_packet;
 
-#if !_STRING_ARCH_unaligned
       /* Although during garbage collection when moving struct hashentry
 	 records around we first copy from old to new location and then
 	 adjust pointer from previous hashentry to it, there is no barrier
-	 between those memory writes.  It is very unlikely to hit it,
-	 so check alignment only if a misaligned load can crash the
-	 application.  */
+	 between those memory writes!!!  This is extremely risky on any
+	 modern CPU which can reorder memory accesses very aggressively.
+	 Check alignment, both as a partial consistency check and to avoid
+	 crashes on targets which require atomic loads to be aligned.  */
       if ((uintptr_t) here & (__alignof__ (*here) - 1))
 	return NULL;
-#endif
 
       if (type == here->type
 	  && keylen == here->len
-	  && (here_key = atomic_forced_read (here->key)) + keylen <= datasize
+	  && (here_key = atomic_load_relaxed (&here->key)) + keylen <= datasize
 	  && memcmp (key, mapped->data + here_key, keylen) == 0
-	  && ((here_packet = atomic_forced_read (here->packet))
+	  && ((here_packet = atomic_load_relaxed (&here->packet))
 	      + sizeof (struct datahead) <= datasize))
 	{
 	  /* We found the entry.  Increment the appropriate counter.  */
 	  struct datahead *dh
 	    = (struct datahead *) (mapped->data + here_packet);
 
-#if !_STRING_ARCH_unaligned
 	  if ((uintptr_t) dh & (__alignof__ (*dh) - 1))
 	    return NULL;
-#endif
 
 	  /* See whether we must ignore the entry or whether something
 	     is wrong because garbage collection is in progress.  */
@@ -501,7 +497,7 @@ __nscd_cache_search (request_type type, const char *key, size_t keylen,
 	    return dh;
 	}
 
-      work = atomic_forced_read (here->next);
+      work = atomic_load_relaxed (&here->next);
       /* Prevent endless loops.  This should never happen but perhaps
 	 the database got corrupted, accidentally or deliberately.  */
       if (work == trail || loop_cnt-- == 0)
@@ -511,16 +507,14 @@ __nscd_cache_search (request_type type, const char *key, size_t keylen,
 	  struct hashentry *trailelem;
 	  trailelem = (struct hashentry *) (mapped->data + trail);
 
-#if !_STRING_ARCH_unaligned
 	  /* We have to redo the checks.  Maybe the data changed.  */
 	  if ((uintptr_t) trailelem & (__alignof__ (*trailelem) - 1))
 	    return NULL;
-#endif
 
 	  if (trail + MINIMUM_HASHENTRY_SIZE > datasize)
 	    return NULL;
 
-	  trail = atomic_forced_read (trailelem->next);
+	  trail = atomic_load_relaxed (&trailelem->next);
 	}
       tick = 1 - tick;
     }
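
For completeness, the dl-lookup.c change above only works if the writers pair
with the new acquire load; a minimal stand-alone sketch of that pairing (again
not part of the patch, using invented names and the GCC/Clang __atomic
builtins) looks like:

/* Stand-alone sketch, not part of the patch: the acquire load of the
   count only works if every resize publishes the new list before (or
   together with) a release store of the count.  Names are invented.  */
#include <stddef.h>
#include <stdio.h>

struct scope
{
  void **list;
  size_t nlist;
};

/* Writer side (resize path): store the new list, then release-store the
   new count so a reader that sees the new count also sees the new list.  */
void
scope_publish (struct scope *s, void **new_list, size_t new_nlist)
{
  s->list = new_list;
  __atomic_store_n (&s->nlist, new_nlist, __ATOMIC_RELEASE);
}

/* Reader side, mirroring the patched do_lookup_x prologue: acquire-load
   the count first, then read the list pointer.  */
size_t
scope_snapshot (struct scope *s, void ***list_out)
{
  size_t n = __atomic_load_n (&s->nlist, __ATOMIC_ACQUIRE);
  *list_out = s->list;
  return n;
}

int
main (void)
{
  static void *entries[4];
  struct scope s = { 0 };

  scope_publish (&s, entries, 4);

  void **list;
  size_t n = scope_snapshot (&s, &list);
  printf ("%zu entries at %p\n", n, (void *) list);
  return 0;
}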