From patchwork Thu Nov 10 03:57:30 2011
From: "Joey Ye"
Subject: [PATCH] Incorrect volatile bitfield load-extend
Date: Thu, 10 Nov 2011 11:57:30 +0800
List-Id: gcc-patches

Trunk gcc mishandles the following volatile bitfield case on ARM targets:

$ cat a.c
extern void check(int);

typedef struct {
  volatile unsigned short a:8, b:8;
} BitStruct;

BitStruct bits = {1, 2};

int main ()
{
  check(bits.a);
  return 0;
}

$ arm-none-eabi-gcc -v 2>&1 | grep "gcc version"
gcc version 4.7.0 20111024 (experimental) [trunk revision 180388] (GCC)

$ arm-none-eabi-gcc -S a.c -mthumb -mcpu=cortex-m3
$ grep -v "^\s*\." a.s
bits:
main:
	@ args = 0, pretend = 0, frame = 0
	@ frame_needed = 1, uses_anonymous_args = 0
	push	{r7, lr}
	add	r7, sp, #0
	movw	r3, #:lower16:bits
	movt	r3, #:upper16:bits
	ldrh	r3, [r3, #0]	@ movhi
	uxth	r3, r3		// Should be uxtb here.  As a result,
				// the output becomes 2056, instead of 8 as expected
	mov	r0, r3
	bl	check
	mov	r3, #0
	mov	r0, r3
	pop	{r7, pc}

The root cause is that when -fstrict-volatile-bitfields is enabled (the
ARM default), a volatile bitfield is not loaded as a bitfield even when
the bitfield is narrower than the load size.

It is actually a regression introduced by:
http://gcc.gnu.org/ml/gcc-patches/2011-03/msg01477.html

Patch to fix this:

2011-11-10  Joey Ye

	Fix volatile bitfield load
	* gcc/expr.c (expand_expr_real_1): Check bitfield size smaller
	than mode size.

Testcase:

2011-11-10  Joey Ye

	* gcc.dg/volatile-bitfields-1.c: New.
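To make the expected semantics concrete, here is a minimal C sketch (not
part of the patch; the explicit uint16_t storage unit and the little-endian
field layout are assumptions matching the ARM example above) of what a
correct access to bits.a amounts to under -fstrict-volatile-bitfields: one
volatile halfword load of the whole storage unit, followed by an 8-bit
extraction, i.e. uxtb rather than uxth:

/* Sketch only: models the intended access, not the compiler internals.  */
#include <stdint.h>
#include <stdio.h>

static unsigned read_field_a (const volatile uint16_t *unit)
{
  uint16_t whole = *unit;            /* one 16-bit volatile load of the unit */
  return (unsigned) (whole & 0xffu); /* keep only the low 8-bit field 'a'    */
}

int main (void)
{
  volatile uint16_t unit = 0x0201;          /* a = 1, b = 2, as in BitStruct bits */
  printf ("%u\n", read_field_a (&unit));    /* prints 1 */
  return 0;
}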
Index: testsuite/gcc.dg/volatile-bitfields-1.c
===================================================================
--- testsuite/gcc.dg/volatile-bitfields-1.c	(revision 0)
+++ testsuite/gcc.dg/volatile-bitfields-1.c	(revision 0)
@@ -0,0 +1,23 @@
+/* { dg-options "-fstrict-volatile-bitfields" } */
+/* { dg-do run } */
+
+extern int puts(const char *);
+extern void abort(void) __attribute__((noreturn));
+
+typedef struct {
+  volatile unsigned short a:8, b:8;
+} BitStruct;
+
+BitStruct bits = {1, 2};
+
+void check(int i, int j)
+{
+  if (i != 1 || j != 2) puts("FAIL"), abort();
+}
+
+int main ()
+{
+  check(bits.a, bits.b);
+
+  return 0;
+}
Index: expr.c
===================================================================
--- expr.c	(revision 181191)
+++ expr.c	(working copy)
@@ -9740,11 +9740,16 @@
 		    && modifier != EXPAND_CONST_ADDRESS
 		    && modifier != EXPAND_INITIALIZER)
 		/* If the field is volatile, we always want an aligned
-		   access.  Only do this if the access is not already naturally
+		   access.  Do this in following two situations:
+		   1. the access is not already naturally
 		   aligned, otherwise "normal" (non-bitfield) volatile fields
-		   become non-addressable.  */
+		   become non-addressable.
+		   2. the bitsize is narrower than the access size. Need
+		   to extract bitfields from the access.  */
 		|| (volatilep && flag_strict_volatile_bitfields > 0
-		    && (bitpos % GET_MODE_ALIGNMENT (mode) != 0))
+		    && (bitpos % GET_MODE_ALIGNMENT (mode) != 0
+			|| (mode1 != BLKmode
+			    && bitsize < GET_MODE_SIZE (mode1) * BITS_PER_UNIT)))
 		/* If the field isn't aligned enough to fetch as a memref,
 		   fetch it as a bit field.  */
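As a rough illustration of the condition the expr.c hunk adds (this is not
GCC code; bitpos, bitsize, mode_align_bits and mode_size_bits are hypothetical
stand-ins for the corresponding GCC values): with
flag_strict_volatile_bitfields > 0, the field is fetched as a bit field either
when the access is not naturally aligned or when the field is narrower than
the mode that would be used for the load.

#include <stdbool.h>

static bool fetch_as_bitfield (unsigned bitpos, unsigned bitsize,
                               unsigned mode_align_bits,
                               unsigned mode_size_bits)
{
  return (bitpos % mode_align_bits != 0)   /* not naturally aligned          */
         || (bitsize < mode_size_bits);    /* field narrower than the access */
}

For the example above, bitsize is 8 while the halfword access mode is 16 bits
wide, so the new second clause triggers and the value is extracted with uxtb.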