From patchwork Tue Aug 18 15:29:29 2015
X-Patchwork-Submitter: Mikael Morin
X-Patchwork-Id: 508352
From: Mikael Morin <mikael.morin@sfr.fr>
To: Richard Biener, Mike Stump
Cc: Richard Sandiford, Jeff Law, Markus Trippelsdorf, gcc-patches
Subject: Re: [Patch] sext_hwi: Avoid left shift of negative value undefined behaviour
Message-ID: <55D34F59.7020307@sfr.fr>
Date: Tue, 18 Aug 2015 17:29:29 +0200
On 14/08/2015 09:29, Richard Biener wrote:
> On Thu, Aug 13, 2015 at 7:47 PM, Mike Stump wrote:
>> On Aug 13, 2015, at 3:05 AM, Richard Biener wrote:
>>> Ok, then guard the <<>> path with __GCC__ and do the expensive bit
>>> stuff otherwise.  Just to cater for other host compilers doing
>>> something unsensibly implementation-defined.

I haven't found __GCC__ used anywhere in the source, but I have found
__GNUC__ both in the source and in the documentation, so I have used that.

>> Ick.  The guard should be specific to the implementation-defined
>> semantic or undefined semantic, and then, when compiling with gcc, we
>> turn it on.  My take is that when we do this, we should add the 5 or 10
>> other most popular compilers to the list of how they behave, so that
>> they all do the cheap-path code as well.  If the language standard were
>> serious in this area, it would specify a header file that can define
>> the implementation-defined and undefined semantics that are interesting
>> for portable code.  It isn't.  If it were, we would then just use the
>> standard-defined guards.

I have used the __GNUC__ macro instead of a direct feature test or a
list of the 10 most common compilers, to avoid the #else branch being
dead basically everywhere.  Maybe the #else branch should just be
dropped.

> GCC is always used to build stage2+ so I don't see the need to handle
> alternate host compilers.

Bootstrapped and regression tested on x86_64-pc-linux-gnu.
Also bootstrapped with clang, but I don't think that means much.

What do you think, OK?

Mikael

2015-08-18  Mikael Morin  <mikael.morin@sfr.fr>

	PR other/67042
	* hwint.h (sext_hwi): Switch to unsigned for the left shift,
	and conditionalize the whole on __GNUC__.  Add fallback code
	depending neither on undefined nor implementation-defined
	behaviour.

diff --git a/gcc/hwint.h b/gcc/hwint.h
index 3793986..4acbf8e 100644
--- a/gcc/hwint.h
+++ b/gcc/hwint.h
@@ -244,11 +244,27 @@ sext_hwi (HOST_WIDE_INT src, unsigned int prec)
   if (prec == HOST_BITS_PER_WIDE_INT)
     return src;
   else
+#if defined (__GNUC__)
     {
+      /* Take the faster path if the implementation-defined bits it's relying
+         on are implemented the way we expect them to be.  Namely, conversion
+         from unsigned to signed preserves bit pattern, and right shift of
+         a signed value propagates the sign bit.
+         We have to convert from signed to unsigned and back, because when left
+         shifting signed values, any overflow is undefined behaviour.  */
       gcc_checking_assert (prec < HOST_BITS_PER_WIDE_INT);
       int shift = HOST_BITS_PER_WIDE_INT - prec;
-      return (src << shift) >> shift;
+      return ((HOST_WIDE_INT) ((unsigned HOST_WIDE_INT) src << shift)) >> shift;
     }
+#else
+    {
+      /* Fall back to the slower, well defined path otherwise.  */
+      gcc_checking_assert (prec < HOST_BITS_PER_WIDE_INT);
+      HOST_WIDE_INT sign_mask = HOST_WIDE_INT_1 << (prec - 1);
+      HOST_WIDE_INT value_mask = (HOST_WIDE_INT_1U << prec) - HOST_WIDE_INT_1U;
+      return (((src & value_mask) ^ sign_mask) - sign_mask);
+    }
+#endif
 }
 
 /* Zero extend SRC starting from PREC.  */
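
For readers outside the GCC tree, the two strategies above can be tried
as a self-contained program.  The sketch below is only an illustration,
not part of the patch: int64_t stands in for HOST_WIDE_INT, 64 for
HOST_BITS_PER_WIDE_INT, and sext64 is a made-up name.

/* Standalone sketch of the two sign-extension paths in sext_hwi.  */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static int64_t
sext64 (int64_t src, unsigned int prec)
{
  if (prec == 64)
    return src;
  assert (0 < prec && prec < 64);
#if defined (__GNUC__)
  /* Fast path: do the left shift on an unsigned type (well defined even
     when the sign bit is involved), then rely on GCC's documented
     guarantees that unsigned-to-signed conversion preserves the bit
     pattern and that right-shifting a negative value replicates the
     sign bit.  */
  int shift = 64 - prec;
  return (int64_t) ((uint64_t) src << shift) >> shift;
#else
  /* Portable fallback: keep only the low PREC bits, then use the
     xor-and-subtract identity to make values with bit PREC-1 set come
     out negative.  No shift here ever touches a negative value.  */
  int64_t sign_mask = (int64_t) (UINT64_C (1) << (prec - 1));
  int64_t value_mask = (int64_t) ((UINT64_C (1) << prec) - 1);
  return ((src & value_mask) ^ sign_mask) - sign_mask;
#endif
}

int
main (void)
{
  printf ("%lld\n", (long long) sext64 (0xF, 4));  /* prints -1 */
  printf ("%lld\n", (long long) sext64 (0x7, 4));  /* prints 7 */
  printf ("%lld\n", (long long) sext64 (-1, 64));  /* prints -1 */
  return 0;
}

The #if defined (__GNUC__) split mirrors the patch: the cheap pair of
shifts where GCC's semantics are guaranteed, mask-based arithmetic that
avoids both undefined and implementation-defined behaviour everywhere
else.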