From patchwork Fri Nov 13 09:50:07 2015
X-Patchwork-Submitter: Stefan Liebler
X-Patchwork-Id: 544134
To: libc-alpha@sourceware.org
From: Stefan Liebler
Subject: [PATCH] S390: Use __asm__ instead of asm.
Date: Fri, 13 Nov 2015 10:50:07 +0100

Hi,

while building the testcases math/test-signgam-finite-c11|99.c with
-std=c11 or -std=c99, I got failures like:

../sysdeps/s390/fpu/bits/mathinline.h: In function ‘__ieee754_sqrt’:
../sysdeps/s390/fpu/bits/mathinline.h:74:3: error: implicit declaration of function ‘asm’ [-Werror=implicit-function-declaration]
   asm ( "sqdbr %0,%1" : "=f" (res) : "f" (x) );
   ^
../sysdeps/s390/fpu/bits/mathinline.h:74:23: error: expected ‘)’ before ‘:’ token
   asm ( "sqdbr %0,%1" : "=f" (res) : "f" (x) );

The "asm" statements in mathinline.h have to be replaced by "__asm__"
to get rid of this failure.  This patch replaces all occurrences of asm
in the s390-specific directories sysdeps/s390 and
sysdeps/unix/sysv/linux/s390.  For consistency, "asm volatile" is
likewise replaced by "__asm__ __volatile__".  Nothing else is changed.
(A minimal stand-alone illustration of the failing construct is
sketched after the patch below.)

Ok to commit?

Bye
Stefan

---
2015-11-13  Stefan Liebler

	* sysdeps/s390/fpu/bits/mathinline.h: Use __asm__ [__volatile__]
	instead of asm [volatile].
	* sysdeps/s390/abort-instr.h: Likewise.
* sysdeps/s390/atomic-machine.h: Likewise. * sysdeps/s390/bits/string.h: Likewise. * sysdeps/s390/dl-tls.h: Likewise. * sysdeps/s390/fpu/e_sqrt.c: Likewise. * sysdeps/s390/fpu/e_sqrtf.c: Likewise. * sysdeps/s390/fpu/e_sqrtl.c: Likewise. * sysdeps/s390/fpu/fesetround.c: Likewise. * sysdeps/s390/fpu/fpu_control.h: Likewise. * sysdeps/s390/fpu/s_fma.c: Likewise. * sysdeps/s390/fpu/s_fmaf.c: Likewise. * sysdeps/s390/memusage.h: Likewise. * sysdeps/s390/multiarch/ifunc-resolve.h: Likewise. * sysdeps/s390/nptl/pthread_spin_lock.c: Likewise. * sysdeps/s390/nptl/pthread_spin_trylock.c: Likewise. * sysdeps/s390/nptl/pthread_spin_unlock.c: Likewise. * sysdeps/s390/nptl/tls.h: Likewise. * sysdeps/s390/s390-32/__longjmp.c: Likewise. * sysdeps/s390/s390-32/backtrace.c: Likewise. * sysdeps/s390/s390-32/dl-machine.h: Likewise. * sysdeps/s390/s390-32/multiarch/memcmp.c: Likewise. * sysdeps/s390/s390-32/stackguard-macros.h: Likewise. * sysdeps/s390/s390-32/tls-macros.h: Likewise. * sysdeps/s390/s390-64/__longjmp.c: Likewise. * sysdeps/s390/s390-64/backtrace.c: Likewise. * sysdeps/s390/s390-64/dl-machine.h: Likewise. * sysdeps/s390/s390-64/iso-8859-1_cp037_z900.c: Likewise. * sysdeps/s390/s390-64/multiarch/memcmp.c: Likewise. * sysdeps/s390/s390-64/stackguard-macros.h: Likewise. * sysdeps/s390/s390-64/tls-macros.h: Likewise. * sysdeps/s390/s390-64/utf16-utf32-z9.c: Likewise. * sysdeps/s390/s390-64/utf8-utf16-z9.c: Likewise. * sysdeps/s390/s390-64/utf8-utf32-z9.c: Likewise. * sysdeps/unix/sysv/linux/s390/brk.c: Likewise. * sysdeps/unix/sysv/linux/s390/elision-trylock.c: Likewise. * sysdeps/unix/sysv/linux/s390/s390-32/____longjmp_chk.c: Likewise. * sysdeps/unix/sysv/linux/s390/s390-32/sysdep.h: Likewise. * sysdeps/unix/sysv/linux/s390/s390-64/____longjmp_chk.c: Likewise. * sysdeps/unix/sysv/linux/s390/s390-64/sysdep.h: Likewise. * sysdeps/unix/sysv/linux/s390/sysconf.c: Likewise. diff --git a/sysdeps/s390/abort-instr.h b/sysdeps/s390/abort-instr.h index 6544b2d..825601a 100644 --- a/sysdeps/s390/abort-instr.h +++ b/sysdeps/s390/abort-instr.h @@ -1,2 +1,2 @@ /* An op-code of 0 should crash any program. 
*/ -#define ABORT_INSTRUCTION asm (".word 0") +#define ABORT_INSTRUCTION __asm__ (".word 0") diff --git a/sysdeps/s390/atomic-machine.h b/sysdeps/s390/atomic-machine.h index 16c8c54..43d0b09 100644 --- a/sysdeps/s390/atomic-machine.h +++ b/sysdeps/s390/atomic-machine.h @@ -55,9 +55,9 @@ typedef uintmax_t uatomic_max_t; #define __arch_compare_and_exchange_val_32_acq(mem, newval, oldval) \ ({ __typeof (mem) __archmem = (mem); \ __typeof (*mem) __archold = (oldval); \ - __asm __volatile ("cs %0,%2,%1" \ - : "+d" (__archold), "=Q" (*__archmem) \ - : "d" (newval), "m" (*__archmem) : "cc", "memory" ); \ + __asm__ __volatile__ ("cs %0,%2,%1" \ + : "+d" (__archold), "=Q" (*__archmem) \ + : "d" (newval), "m" (*__archmem) : "cc", "memory" ); \ __archold; }) #ifdef __s390x__ @@ -65,9 +65,9 @@ typedef uintmax_t uatomic_max_t; # define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \ ({ __typeof (mem) __archmem = (mem); \ __typeof (*mem) __archold = (oldval); \ - __asm __volatile ("csg %0,%2,%1" \ - : "+d" (__archold), "=Q" (*__archmem) \ - : "d" ((long) (newval)), "m" (*__archmem) : "cc", "memory" ); \ + __asm__ __volatile__ ("csg %0,%2,%1" \ + : "+d" (__archold), "=Q" (*__archmem) \ + : "d" ((long) (newval)), "m" (*__archmem) : "cc", "memory" ); \ __archold; }) #else # define __HAVE_64B_ATOMICS 0 @@ -89,17 +89,17 @@ typedef uintmax_t uatomic_max_t; __typeof (*(mem)) __atg5_oldval = *__atg5_memp; \ __typeof (*(mem)) __atg5_value = (newvalue); \ if (sizeof (*mem) == 4) \ - __asm __volatile ("0: cs %0,%2,%1\n" \ - " jl 0b" \ - : "+d" (__atg5_oldval), "=Q" (*__atg5_memp) \ - : "d" (__atg5_value), "m" (*__atg5_memp) \ - : "cc", "memory" ); \ + __asm__ __volatile__ ("0: cs %0,%2,%1\n" \ + " jl 0b" \ + : "+d" (__atg5_oldval), "=Q" (*__atg5_memp) \ + : "d" (__atg5_value), "m" (*__atg5_memp) \ + : "cc", "memory" ); \ else if (sizeof (*mem) == 8) \ - __asm __volatile ("0: csg %0,%2,%1\n" \ - " jl 0b" \ - : "+d" ( __atg5_oldval), "=Q" (*__atg5_memp) \ - : "d" ((long) __atg5_value), "m" (*__atg5_memp) \ - : "cc", "memory" ); \ + __asm__ __volatile__ ("0: csg %0,%2,%1\n" \ + " jl 0b" \ + : "+d" ( __atg5_oldval), "=Q" (*__atg5_memp) \ + : "d" ((long) __atg5_value), "m" (*__atg5_memp) \ + : "cc", "memory" ); \ else \ abort (); \ __atg5_oldval; }) @@ -109,11 +109,11 @@ typedef uintmax_t uatomic_max_t; __typeof (*(mem)) __atg5_oldval = *__atg5_memp; \ __typeof (*(mem)) __atg5_value = (newvalue); \ if (sizeof (*mem) == 4) \ - __asm __volatile ("0: cs %0,%2,%1\n" \ - " jl 0b" \ - : "+d" (__atg5_oldval), "=Q" (*__atg5_memp) \ - : "d" (__atg5_value), "m" (*__atg5_memp) \ - : "cc", "memory" ); \ + __asm__ __volatile__ ("0: cs %0,%2,%1\n" \ + " jl 0b" \ + : "+d" (__atg5_oldval), "=Q" (*__atg5_memp) \ + : "d" (__atg5_value), "m" (*__atg5_memp) \ + : "cc", "memory" ); \ else \ abort (); \ __atg5_oldval; }) diff --git a/sysdeps/s390/bits/string.h b/sysdeps/s390/bits/string.h index b48d511..d99039e 100644 --- a/sysdeps/s390/bits/string.h +++ b/sysdeps/s390/bits/string.h @@ -64,7 +64,7 @@ __strlen_g (const char *__str) #ifndef _FORCE_INLINES #define strcpy(dest, src) __strcpy_g ((dest), (src)) -__STRING_INLINE char *__strcpy_g (char *, const char *) __asm ("strcpy"); +__STRING_INLINE char *__strcpy_g (char *, const char *) __asm__ ("strcpy"); __STRING_INLINE char * __strcpy_g (char *__dest, const char *__src) diff --git a/sysdeps/s390/dl-tls.h b/sysdeps/s390/dl-tls.h index 8132b10..a3dd9f8 100644 --- a/sysdeps/s390/dl-tls.h +++ b/sysdeps/s390/dl-tls.h @@ -62,7 +62,7 @@ versioned_symbol (ld, 
__tls_get_addr_internal_tmp, the thread descriptor instead of a pointer to the variable. */ # ifdef __s390x__ -asm("\n\ +__asm__("\n\ .text\n\ .globl __tls_get_offset\n\ .type __tls_get_offset, @function\n\ @@ -72,7 +72,7 @@ __tls_get_offset:\n\ jg __tls_get_addr\n\ "); # elif defined __s390__ -asm("\n\ +__asm__("\n\ .text\n\ .globl __tls_get_offset\n\ .type __tls_get_offset, @function\n\ diff --git a/sysdeps/s390/fpu/bits/mathinline.h b/sysdeps/s390/fpu/bits/mathinline.h index 1d35746..633bd0b 100644 --- a/sysdeps/s390/fpu/bits/mathinline.h +++ b/sysdeps/s390/fpu/bits/mathinline.h @@ -71,7 +71,7 @@ __NTH (__ieee754_sqrt (double x)) { double res; - asm ( "sqdbr %0,%1" : "=f" (res) : "f" (x) ); + __asm__ ( "sqdbr %0,%1" : "=f" (res) : "f" (x) ); return res; } @@ -80,7 +80,7 @@ __NTH (__ieee754_sqrtf (float x)) { float res; - asm ( "sqebr %0,%1" : "=f" (res) : "f" (x) ); + __asm__ ( "sqebr %0,%1" : "=f" (res) : "f" (x) ); return res; } @@ -90,7 +90,7 @@ __NTH (sqrtl (long double __x)) { long double res; - asm ( "sqxbr %0,%1" : "=f" (res) : "f" (__x) ); + __asm__ ( "sqxbr %0,%1" : "=f" (res) : "f" (__x) ); return res; } # endif /* !__NO_LONG_DOUBLE_MATH */ diff --git a/sysdeps/s390/fpu/e_sqrt.c b/sysdeps/s390/fpu/e_sqrt.c index 3567562..f8a8a65 100644 --- a/sysdeps/s390/fpu/e_sqrt.c +++ b/sysdeps/s390/fpu/e_sqrt.c @@ -23,7 +23,7 @@ __ieee754_sqrt (double x) { double res; - asm ( "sqdbr %0,%1" : "=f" (res) : "f" (x) ); + __asm__ ( "sqdbr %0,%1" : "=f" (res) : "f" (x) ); return res; } strong_alias (__ieee754_sqrt, __sqrt_finite) diff --git a/sysdeps/s390/fpu/e_sqrtf.c b/sysdeps/s390/fpu/e_sqrtf.c index 3fdd74f..bf3a47b 100644 --- a/sysdeps/s390/fpu/e_sqrtf.c +++ b/sysdeps/s390/fpu/e_sqrtf.c @@ -23,7 +23,7 @@ __ieee754_sqrtf (float x) { float res; - asm ( "sqebr %0,%1" : "=f" (res) : "f" (x) ); + __asm__ ( "sqebr %0,%1" : "=f" (res) : "f" (x) ); return res; } strong_alias (__ieee754_sqrtf, __sqrtf_finite) diff --git a/sysdeps/s390/fpu/e_sqrtl.c b/sysdeps/s390/fpu/e_sqrtl.c index b5215a9..06d92b4 100644 --- a/sysdeps/s390/fpu/e_sqrtl.c +++ b/sysdeps/s390/fpu/e_sqrtl.c @@ -24,7 +24,7 @@ __ieee754_sqrtl (long double x) { long double res; - asm ( "sqxbr %0,%1" : "=f" (res) : "f" (x) ); + __asm__ ( "sqxbr %0,%1" : "=f" (res) : "f" (x) ); return res; } strong_alias (__ieee754_sqrtl, __sqrtl_finite) diff --git a/sysdeps/s390/fpu/fesetround.c b/sysdeps/s390/fpu/fesetround.c index d6eedce..5babb9a 100644 --- a/sysdeps/s390/fpu/fesetround.c +++ b/sysdeps/s390/fpu/fesetround.c @@ -28,9 +28,9 @@ __fesetround (int round) /* ROUND is not a valid rounding mode. */ return 1; } - __asm__ volatile ("srnm 0(%0)" - : - : "a" (round)); + __asm__ __volatile__ ("srnm 0(%0)" + : + : "a" (round)); return 0; } diff --git a/sysdeps/s390/fpu/fpu_control.h b/sysdeps/s390/fpu/fpu_control.h index 1f663b3..f87c32e 100644 --- a/sysdeps/s390/fpu/fpu_control.h +++ b/sysdeps/s390/fpu/fpu_control.h @@ -34,8 +34,8 @@ typedef unsigned int fpu_control_t; /* Macros for accessing the hardware control word. */ -#define _FPU_GETCW(cw) __asm__ volatile ("efpc %0,0" : "=d" (cw)) -#define _FPU_SETCW(cw) __asm__ volatile ("sfpc %0,0" : : "d" (cw)) +#define _FPU_GETCW(cw) __asm__ __volatile__ ("efpc %0,0" : "=d" (cw)) +#define _FPU_SETCW(cw) __asm__ __volatile__ ("sfpc %0,0" : : "d" (cw)) /* Default control word set at startup. 
*/ extern fpu_control_t __fpu_control; diff --git a/sysdeps/s390/fpu/s_fma.c b/sysdeps/s390/fpu/s_fma.c index fbfeea4..63d5e9a 100644 --- a/sysdeps/s390/fpu/s_fma.c +++ b/sysdeps/s390/fpu/s_fma.c @@ -23,7 +23,7 @@ double __fma (double x, double y, double z) { double r; - asm ("madbr %0,%1,%2" : "=f" (r) : "%f" (x), "fR" (y), "0" (z)); + __asm__ ("madbr %0,%1,%2" : "=f" (r) : "%f" (x), "fR" (y), "0" (z)); return r; } #ifndef __fma diff --git a/sysdeps/s390/fpu/s_fmaf.c b/sysdeps/s390/fpu/s_fmaf.c index f65c73e..8d15870 100644 --- a/sysdeps/s390/fpu/s_fmaf.c +++ b/sysdeps/s390/fpu/s_fmaf.c @@ -23,7 +23,7 @@ float __fmaf (float x, float y, float z) { float r; - asm ("maebr %0,%1,%2" : "=f" (r) : "%f" (x), "fR" (y), "0" (z)); + __asm__ ("maebr %0,%1,%2" : "=f" (r) : "%f" (x), "fR" (y), "0" (z)); return r; } #ifndef __fmaf diff --git a/sysdeps/s390/memusage.h b/sysdeps/s390/memusage.h index 4721b59..443e474 100644 --- a/sysdeps/s390/memusage.h +++ b/sysdeps/s390/memusage.h @@ -15,6 +15,6 @@ License along with the GNU C Library; if not, see . */ -#define GETSP() ({ register uintptr_t stack_ptr asm ("15"); stack_ptr; }) +#define GETSP() ({ register uintptr_t stack_ptr __asm__ ("15"); stack_ptr; }) #include diff --git a/sysdeps/s390/multiarch/ifunc-resolve.h b/sysdeps/s390/multiarch/ifunc-resolve.h index e9fd90e..52a1d30 100644 --- a/sysdeps/s390/multiarch/ifunc-resolve.h +++ b/sysdeps/s390/multiarch/ifunc-resolve.h @@ -31,22 +31,22 @@ #define S390_STORE_STFLE(STFLE_BITS) \ /* We want just 1 double word to be returned. */ \ - register unsigned long reg0 asm("0") = 0; \ + register unsigned long reg0 __asm__("0") = 0; \ \ - asm volatile(".machine push" "\n\t" \ - ".machine \"z9-109\"" "\n\t" \ - ".machinemode \"zarch_nohighgprs\"\n\t" \ - "stfle %0" "\n\t" \ - ".machine pop" "\n" \ - : "=QS" (STFLE_BITS), "+d" (reg0) \ - : : "cc"); + __asm__ __volatile__(".machine push" "\n\t" \ + ".machine \"z9-109\"" "\n\t" \ + ".machinemode \"zarch_nohighgprs\"\n\t" \ + "stfle %0" "\n\t" \ + ".machine pop" "\n" \ + : "=QS" (STFLE_BITS), "+d" (reg0) \ + : : "cc"); #define s390_libc_ifunc(FUNC) \ - asm (".globl " #FUNC "\n\t" \ - ".type " #FUNC ",@gnu_indirect_function\n\t" \ - ".set " #FUNC ",__resolve_" #FUNC "\n\t" \ - ".globl __GI_" #FUNC "\n\t" \ - ".set __GI_" #FUNC "," #FUNC "\n"); \ + __asm__ (".globl " #FUNC "\n\t" \ + ".type " #FUNC ",@gnu_indirect_function\n\t" \ + ".set " #FUNC ",__resolve_" #FUNC "\n\t" \ + ".globl __GI_" #FUNC "\n\t" \ + ".set __GI_" #FUNC "," #FUNC "\n"); \ \ /* Make the declarations of the optimized functions hidden in order to prevent GOT slots being generated for them. 
*/ \ diff --git a/sysdeps/s390/nptl/pthread_spin_lock.c b/sysdeps/s390/nptl/pthread_spin_lock.c index f6bca3d..a505aff 100644 --- a/sysdeps/s390/nptl/pthread_spin_lock.c +++ b/sysdeps/s390/nptl/pthread_spin_lock.c @@ -23,10 +23,10 @@ pthread_spin_lock (pthread_spinlock_t *lock) { int oldval; - __asm __volatile ("0: lhi %0,0\n" - " cs %0,%2,%1\n" - " jl 0b" - : "=&d" (oldval), "=Q" (*lock) - : "d" (1), "m" (*lock) : "cc" ); + __asm__ __volatile__ ("0: lhi %0,0\n" + " cs %0,%2,%1\n" + " jl 0b" + : "=&d" (oldval), "=Q" (*lock) + : "d" (1), "m" (*lock) : "cc" ); return 0; } diff --git a/sysdeps/s390/nptl/pthread_spin_trylock.c b/sysdeps/s390/nptl/pthread_spin_trylock.c index 210a2c6..3bc0ded 100644 --- a/sysdeps/s390/nptl/pthread_spin_trylock.c +++ b/sysdeps/s390/nptl/pthread_spin_trylock.c @@ -24,9 +24,9 @@ pthread_spin_trylock (pthread_spinlock_t *lock) { int old; - __asm __volatile ("cs %0,%3,%1" - : "=d" (old), "=Q" (*lock) - : "0" (0), "d" (1), "m" (*lock) : "cc" ); + __asm__ __volatile__ ("cs %0,%3,%1" + : "=d" (old), "=Q" (*lock) + : "0" (0), "d" (1), "m" (*lock) : "cc" ); return old != 0 ? EBUSY : 0; } diff --git a/sysdeps/s390/nptl/pthread_spin_unlock.c b/sysdeps/s390/nptl/pthread_spin_unlock.c index a34ebcc..e3d8856 100644 --- a/sysdeps/s390/nptl/pthread_spin_unlock.c +++ b/sysdeps/s390/nptl/pthread_spin_unlock.c @@ -24,9 +24,9 @@ int pthread_spin_unlock (pthread_spinlock_t *lock) { - __asm __volatile (" xc %O0(4,%R0),%0\n" - " bcr 15,0" - : "=Q" (*lock) : "m" (*lock) : "cc" ); + __asm__ __volatile__ (" xc %O0(4,%R0),%0\n" + " bcr 15,0" + : "=Q" (*lock) : "m" (*lock) : "cc" ); return 0; } strong_alias (pthread_spin_unlock, pthread_spin_init) diff --git a/sysdeps/s390/nptl/tls.h b/sysdeps/s390/nptl/tls.h index e6f8a47..28a2119 100644 --- a/sysdeps/s390/nptl/tls.h +++ b/sysdeps/s390/nptl/tls.h @@ -159,9 +159,9 @@ typedef struct /* Set the stack guard field in TCB head. */ #define THREAD_SET_STACK_GUARD(value) \ - do \ + do \ { \ - __asm __volatile ("" : : : "a0", "a1"); \ + __asm__ __volatile__ ("" : : : "a0", "a1"); \ THREAD_SETMEM (THREAD_SELF, header.stack_guard, value); \ } \ while (0) diff --git a/sysdeps/s390/s390-32/__longjmp.c b/sysdeps/s390/s390-32/__longjmp.c index b253934..e9fe7e2 100644 --- a/sysdeps/s390/s390-32/__longjmp.c +++ b/sysdeps/s390/s390-32/__longjmp.c @@ -37,46 +37,46 @@ __longjmp (__jmp_buf env, int val) #elif defined CHECK_SP CHECK_SP (env, 0); #endif - register int r2 __asm ("%r2") = val == 0 ? 1 : val; + register int r2 __asm__ ("%r2") = val == 0 ? 1 : val; #ifdef PTR_DEMANGLE - register uintptr_t r3 __asm ("%r3") = guard; - register void *r1 __asm ("%r1") = (void *) env; + register uintptr_t r3 __asm__ ("%r3") = guard; + register void *r1 __asm__ ("%r1") = (void *) env; #endif /* Restore registers and jump back. */ - asm volatile ( + __asm__ __volatile__ ( /* longjmp probe expects longjmp first argument, second argument and target address. 
*/ #ifdef PTR_DEMANGLE - "lm %%r4,%%r5,32(%1)\n\t" - "xr %%r4,%2\n\t" - "xr %%r5,%2\n\t" - LIBC_PROBE_ASM (longjmp, 4@%1 -4@%0 4@%%r4) + "lm %%r4,%%r5,32(%1)\n\t" + "xr %%r4,%2\n\t" + "xr %%r5,%2\n\t" + LIBC_PROBE_ASM (longjmp, 4@%1 -4@%0 4@%%r4) #else - LIBC_PROBE_ASM (longjmp, 4@%1 -4@%0 4@%%r14) + LIBC_PROBE_ASM (longjmp, 4@%1 -4@%0 4@%%r14) #endif - /* restore fpregs */ - "ld %%f6,48(%1)\n\t" - "ld %%f4,40(%1)\n\t" + /* restore fpregs */ + "ld %%f6,48(%1)\n\t" + "ld %%f4,40(%1)\n\t" - /* restore gregs and return to jmp_buf target */ + /* restore gregs and return to jmp_buf target */ #ifdef PTR_DEMANGLE - "lm %%r6,%%r13,0(%1)\n\t" - "lr %%r15,%%r5\n\t" - LIBC_PROBE_ASM (longjmp_target, 4@%1 -4@%0 4@%%r4) - "br %%r4" + "lm %%r6,%%r13,0(%1)\n\t" + "lr %%r15,%%r5\n\t" + LIBC_PROBE_ASM (longjmp_target, 4@%1 -4@%0 4@%%r4) + "br %%r4" #else - "lm %%r6,%%r15,0(%1)\n\t" - LIBC_PROBE_ASM (longjmp_target, 4@%1 -4@%0 4@%%r14) - "br %%r14" + "lm %%r6,%%r15,0(%1)\n\t" + LIBC_PROBE_ASM (longjmp_target, 4@%1 -4@%0 4@%%r14) + "br %%r14" #endif - : : "r" (r2), + : : "r" (r2), #ifdef PTR_DEMANGLE - "r" (r1), "r" (r3) + "r" (r1), "r" (r3) #else - "a" (env) + "a" (env) #endif - ); + ); /* Avoid `volatile function does return' warnings. */ for (;;); diff --git a/sysdeps/s390/s390-32/backtrace.c b/sysdeps/s390/s390-32/backtrace.c index 5b5738c..3531d31 100644 --- a/sysdeps/s390/s390-32/backtrace.c +++ b/sysdeps/s390/s390-32/backtrace.c @@ -85,7 +85,7 @@ __backchain_backtrace (void **array, int size) struct layout *stack; int cnt = 0; - asm ("LR %0,%%r15" : "=d" (stack) ); + __asm__ ("LR %0,%%r15" : "=d" (stack) ); /* We skip the call to this function, it makes no sense to record it. */ stack = (struct layout *) stack->back_chain; while (cnt < size) diff --git a/sysdeps/s390/s390-32/dl-machine.h b/sysdeps/s390/s390-32/dl-machine.h index 119e7b5..d2680a4 100644 --- a/sysdeps/s390/s390-32/dl-machine.h +++ b/sysdeps/s390/s390-32/dl-machine.h @@ -55,10 +55,10 @@ elf_machine_dynamic (void) { register Elf32_Addr *got; - asm( " bras %0,2f\n" - "1: .long _GLOBAL_OFFSET_TABLE_-1b\n" - "2: al %0,0(%0)" - : "=&a" (got) : : "0" ); + __asm__( " bras %0,2f\n" + "1: .long _GLOBAL_OFFSET_TABLE_-1b\n" + "2: al %0,0(%0)" + : "=&a" (got) : : "0" ); return *got; } @@ -70,14 +70,14 @@ elf_machine_load_address (void) { Elf32_Addr addr; - asm( " bras 1,2f\n" - "1: .long _GLOBAL_OFFSET_TABLE_ - 1b\n" - " .long (_dl_start - 1b - 0x80000000) & 0x00000000ffffffff\n" - "2: l %0,4(1)\n" - " ar %0,1\n" - " al 1,0(1)\n" - " sl %0,_dl_start@GOT(1)" - : "=&d" (addr) : : "1" ); + __asm__( " bras 1,2f\n" + "1: .long _GLOBAL_OFFSET_TABLE_ - 1b\n" + " .long (_dl_start - 1b - 0x80000000) & 0x00000000ffffffff\n" + "2: l %0,4(1)\n" + " ar %0,1\n" + " al 1,0(1)\n" + " sl %0,_dl_start@GOT(1)" + : "=&d" (addr) : : "1" ); return addr; } @@ -141,7 +141,7 @@ elf_machine_runtime_setup (struct link_map *l, int lazy, int profile) The C function `_dl_start' is the real entry point; its return value is the user program's entry point. 
*/ -#define RTLD_START asm ("\n\ +#define RTLD_START __asm__ ("\n\ .text\n\ .align 4\n\ .globl _start\n\ diff --git a/sysdeps/s390/s390-32/multiarch/memcmp.c b/sysdeps/s390/s390-32/multiarch/memcmp.c index a3607e4..d861e0b 100644 --- a/sysdeps/s390/s390-32/multiarch/memcmp.c +++ b/sysdeps/s390/s390-32/multiarch/memcmp.c @@ -20,5 +20,5 @@ # include s390_libc_ifunc (memcmp) -asm(".weak bcmp ; bcmp = memcmp"); +__asm__(".weak bcmp ; bcmp = memcmp"); #endif diff --git a/sysdeps/s390/s390-32/stackguard-macros.h b/sysdeps/s390/s390-32/stackguard-macros.h index 449e8d4..4610974 100644 --- a/sysdeps/s390/s390-32/stackguard-macros.h +++ b/sysdeps/s390/s390-32/stackguard-macros.h @@ -1,15 +1,15 @@ #include #define STACK_CHK_GUARD \ - ({ uintptr_t x; asm ("ear %0,%%a0; l %0,0x14(%0)" : "=a" (x)); x; }) + ({ uintptr_t x; __asm__ ("ear %0,%%a0; l %0,0x14(%0)" : "=a" (x)); x; }) /* On s390/s390x there is no unique pointer guard, instead we use the same value as the stack guard. */ #define POINTER_CHK_GUARD \ - ({ \ - uintptr_t x; \ - asm ("ear %0,%%a0; l %0,%1(%0)" \ - : "=a" (x) \ - : "i" (offsetof (tcbhead_t, stack_guard))); \ - x; \ - }) + ({ \ + uintptr_t x; \ + __asm__ ("ear %0,%%a0; l %0,%1(%0)" \ + : "=a" (x) \ + : "i" (offsetof (tcbhead_t, stack_guard))); \ + x; \ + }) diff --git a/sysdeps/s390/s390-32/tls-macros.h b/sysdeps/s390/s390-32/tls-macros.h index a592d81..09b42aa 100644 --- a/sysdeps/s390/s390-32/tls-macros.h +++ b/sysdeps/s390/s390-32/tls-macros.h @@ -1,102 +1,102 @@ #define TLS_LE(x) \ ({ unsigned long __offset; \ - asm ("bras %0,1f\n" \ - "0:\t.long " #x "@ntpoff\n" \ - "1:\tl %0,0(%0)" \ - : "=a" (__offset) : : "cc" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.long " #x "@ntpoff\n" \ + "1:\tl %0,0(%0)" \ + : "=a" (__offset) : : "cc" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #ifdef PIC # define TLS_IE(x) \ ({ unsigned long __offset, __got; \ - asm ("bras %0,1f\n" \ - "0:\t.long _GLOBAL_OFFSET_TABLE_-0b\n\t" \ - ".long " #x "@gotntpoff\n" \ - "1:\tl %1,0(%0)\n\t" \ - "la %1,0(%1,%0)\n\t" \ - "l %0,4(%0)\n\t" \ - "l %0,0(%0,%1):tls_load:" #x "\n" \ - : "=&a" (__offset), "=&a" (__got) : : "cc" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.long _GLOBAL_OFFSET_TABLE_-0b\n\t" \ + ".long " #x "@gotntpoff\n" \ + "1:\tl %1,0(%0)\n\t" \ + "la %1,0(%1,%0)\n\t" \ + "l %0,4(%0)\n\t" \ + "l %0,0(%0,%1):tls_load:" #x "\n" \ + : "=&a" (__offset), "=&a" (__got) : : "cc" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #else # define TLS_IE(x) \ ({ unsigned long __offset; \ - asm ("bras %0,1f\n" \ - "0:\t.long " #x "@indntpoff\n" \ - "1:\t l %0,0(%0)\n\t" \ - "l %0,0(%0):tls_load:" #x \ - : "=&a" (__offset) : : "cc" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.long " #x "@indntpoff\n" \ + "1:\t l %0,0(%0)\n\t" \ + "l %0,0(%0):tls_load:" #x \ + : "=&a" (__offset) : : "cc" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #endif #ifdef PIC # define TLS_LD(x) \ ({ unsigned long __offset, __save12; \ - asm ("bras %0,1f\n" \ - "0:\t.long _GLOBAL_OFFSET_TABLE_-0b\n\t" \ - ".long __tls_get_offset@plt-0b\n\t" \ - ".long " #x "@tlsldm\n\t" \ - ".long " #x "@dtpoff\n" \ - "1:\tlr %1,%%r12\n\t" \ - "l %%r12,0(%0)\n\t" \ - "la %%r12,0(%%r12,%0)\n\t" \ - "l %%r1,4(%0)\n\t" \ - "l %%r2,8(%0)\n\t" \ - "bas %%r14,0(%%r1,%0):tls_ldcall:" #x "\n\t" \ - "l %0,12(%0)\n\t" \ - "alr %0,%%r2\n\t" \ - "lr %%r12,%1" \ - : "=&a" (__offset), "=&a" (__save12) \ - : : "cc", "0", "1", "2", "3", "4", "5" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.long _GLOBAL_OFFSET_TABLE_-0b\n\t" \ + ".long 
__tls_get_offset@plt-0b\n\t" \ + ".long " #x "@tlsldm\n\t" \ + ".long " #x "@dtpoff\n" \ + "1:\tlr %1,%%r12\n\t" \ + "l %%r12,0(%0)\n\t" \ + "la %%r12,0(%%r12,%0)\n\t" \ + "l %%r1,4(%0)\n\t" \ + "l %%r2,8(%0)\n\t" \ + "bas %%r14,0(%%r1,%0):tls_ldcall:" #x "\n\t" \ + "l %0,12(%0)\n\t" \ + "alr %0,%%r2\n\t" \ + "lr %%r12,%1" \ + : "=&a" (__offset), "=&a" (__save12) \ + : : "cc", "0", "1", "2", "3", "4", "5" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #else # define TLS_LD(x) \ ({ unsigned long __offset; \ - asm ("bras %0,1f\n" \ - "0:\t.long _GLOBAL_OFFSET_TABLE_\n\t" \ - ".long __tls_get_offset@plt\n\t" \ - ".long " #x "@tlsldm\n\t" \ - ".long " #x "@dtpoff\n" \ - "1:\tl %%r12,0(%0)\n\t" \ - "l %%r1,4(%0)\n\t" \ - "l %%r2,8(%0)\n\t" \ - "bas %%r14,0(%%r1):tls_ldcall:" #x "\n\t" \ - "l %0,12(%0)\n\t" \ - "alr %0,%%r2" \ - : "=&a" (__offset) : : "cc", "0", "1", "2", "3", "4", "5", "12" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.long _GLOBAL_OFFSET_TABLE_\n\t" \ + ".long __tls_get_offset@plt\n\t" \ + ".long " #x "@tlsldm\n\t" \ + ".long " #x "@dtpoff\n" \ + "1:\tl %%r12,0(%0)\n\t" \ + "l %%r1,4(%0)\n\t" \ + "l %%r2,8(%0)\n\t" \ + "bas %%r14,0(%%r1):tls_ldcall:" #x "\n\t" \ + "l %0,12(%0)\n\t" \ + "alr %0,%%r2" \ + : "=&a" (__offset) : : "cc", "0", "1", "2", "3", "4", "5", "12" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #endif #ifdef PIC # define TLS_GD(x) \ ({ unsigned long __offset, __save12; \ - asm ("bras %0,1f\n" \ - "0:\t.long _GLOBAL_OFFSET_TABLE_-0b\n\t" \ - ".long __tls_get_offset@plt-0b\n\t" \ - ".long " #x "@tlsgd\n" \ - "1:\tlr %1,%%r12\n\t" \ - "l %%r12,0(%0)\n\t" \ - "la %%r12,0(%%r12,%0)\n\t" \ - "l %%r1,4(%0)\n\t" \ - "l %%r2,8(%0)\n\t" \ - "bas %%r14,0(%%r1,%0):tls_gdcall:" #x "\n\t" \ - "lr %0,%%r2\n\t" \ - "lr %%r12,%1" \ - : "=&a" (__offset), "=&a" (__save12) \ - : : "cc", "0", "1", "2", "3", "4", "5" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.long _GLOBAL_OFFSET_TABLE_-0b\n\t" \ + ".long __tls_get_offset@plt-0b\n\t" \ + ".long " #x "@tlsgd\n" \ + "1:\tlr %1,%%r12\n\t" \ + "l %%r12,0(%0)\n\t" \ + "la %%r12,0(%%r12,%0)\n\t" \ + "l %%r1,4(%0)\n\t" \ + "l %%r2,8(%0)\n\t" \ + "bas %%r14,0(%%r1,%0):tls_gdcall:" #x "\n\t" \ + "lr %0,%%r2\n\t" \ + "lr %%r12,%1" \ + : "=&a" (__offset), "=&a" (__save12) \ + : : "cc", "0", "1", "2", "3", "4", "5" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #else # define TLS_GD(x) \ ({ unsigned long __offset; \ - asm ("bras %0,1f\n" \ - "0:\t.long _GLOBAL_OFFSET_TABLE_\n\t" \ - ".long __tls_get_offset@plt\n\t" \ - ".long " #x "@tlsgd\n" \ - "1:\tl %%r12,0(%0)\n\t" \ - "l %%r1,4(%0)\n\t" \ - "l %%r2,8(%0)\n\t" \ - "bas %%r14,0(%%r1):tls_gdcall:" #x "\n\t" \ - "lr %0,%%r2" \ - : "=&a" (__offset) : : "cc", "0", "1", "2", "3", "4", "5", "12" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.long _GLOBAL_OFFSET_TABLE_\n\t" \ + ".long __tls_get_offset@plt\n\t" \ + ".long " #x "@tlsgd\n" \ + "1:\tl %%r12,0(%0)\n\t" \ + "l %%r1,4(%0)\n\t" \ + "l %%r2,8(%0)\n\t" \ + "bas %%r14,0(%%r1):tls_gdcall:" #x "\n\t" \ + "lr %0,%%r2" \ + : "=&a" (__offset) : : "cc", "0", "1", "2", "3", "4", "5", "12" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #endif diff --git a/sysdeps/s390/s390-64/__longjmp.c b/sysdeps/s390/s390-64/__longjmp.c index e75e648..e962195 100644 --- a/sysdeps/s390/s390-64/__longjmp.c +++ b/sysdeps/s390/s390-64/__longjmp.c @@ -37,52 +37,52 @@ __longjmp (__jmp_buf env, int val) #elif defined CHECK_SP CHECK_SP (env, 0); #endif - register long int r2 __asm ("%r2") = val == 0 ? 
1 : val; + register long int r2 __asm__ ("%r2") = val == 0 ? 1 : val; #ifdef PTR_DEMANGLE - register uintptr_t r3 __asm ("%r3") = guard; - register void *r1 __asm ("%r1") = (void *) env; + register uintptr_t r3 __asm__ ("%r3") = guard; + register void *r1 __asm__ ("%r1") = (void *) env; #endif /* Restore registers and jump back. */ - asm volatile ( - /* longjmp probe expects longjmp first argument, second - argument and target address. */ + __asm__ __volatile__ ( + /* longjmp probe expects longjmp first argument, second + argument and target address. */ #ifdef PTR_DEMANGLE - "lmg %%r4,%%r5,64(%1)\n\t" - "xgr %%r4,%2\n\t" - "xgr %%r5,%2\n\t" - LIBC_PROBE_ASM (longjmp, 8@%1 -4@%0 8@%%r4) + "lmg %%r4,%%r5,64(%1)\n\t" + "xgr %%r4,%2\n\t" + "xgr %%r5,%2\n\t" + LIBC_PROBE_ASM (longjmp, 8@%1 -4@%0 8@%%r4) #else - LIBC_PROBE_ASM (longjmp, 8@%1 -4@%0 8@%%r14) + LIBC_PROBE_ASM (longjmp, 8@%1 -4@%0 8@%%r14) #endif - /* restore fpregs */ - "ld %%f8,80(%1)\n\t" - "ld %%f9,88(%1)\n\t" - "ld %%f10,96(%1)\n\t" - "ld %%f11,104(%1)\n\t" - "ld %%f12,112(%1)\n\t" - "ld %%f13,120(%1)\n\t" - "ld %%f14,128(%1)\n\t" - "ld %%f15,136(%1)\n\t" + /* restore fpregs */ + "ld %%f8,80(%1)\n\t" + "ld %%f9,88(%1)\n\t" + "ld %%f10,96(%1)\n\t" + "ld %%f11,104(%1)\n\t" + "ld %%f12,112(%1)\n\t" + "ld %%f13,120(%1)\n\t" + "ld %%f14,128(%1)\n\t" + "ld %%f15,136(%1)\n\t" - /* restore gregs and return to jmp_buf target */ + /* restore gregs and return to jmp_buf target */ #ifdef PTR_DEMANGLE - "lmg %%r6,%%r13,0(%1)\n\t" - "lgr %%r15,%%r5\n\t" - LIBC_PROBE_ASM (longjmp_target, 8@%1 -4@%0 8@%%r4) - "br %%r4" + "lmg %%r6,%%r13,0(%1)\n\t" + "lgr %%r15,%%r5\n\t" + LIBC_PROBE_ASM (longjmp_target, 8@%1 -4@%0 8@%%r4) + "br %%r4" #else - "lmg %%r6,%%r15,0(%1)\n\t" - LIBC_PROBE_ASM (longjmp_target, 8@%1 -4@%0 8@%%r14) - "br %%r14" + "lmg %%r6,%%r15,0(%1)\n\t" + LIBC_PROBE_ASM (longjmp_target, 8@%1 -4@%0 8@%%r14) + "br %%r14" #endif - : : "r" (r2), + : : "r" (r2), #ifdef PTR_DEMANGLE - "r" (r1), "r" (r3) + "r" (r1), "r" (r3) #else - "a" (env) + "a" (env) #endif - ); + ); /* Avoid `volatile function does return' warnings. */ for (;;); diff --git a/sysdeps/s390/s390-64/backtrace.c b/sysdeps/s390/s390-64/backtrace.c index e4d7efe..6f15ce9 100644 --- a/sysdeps/s390/s390-64/backtrace.c +++ b/sysdeps/s390/s390-64/backtrace.c @@ -84,7 +84,7 @@ __backchain_backtrace (void **array, int size) struct layout *stack; int cnt = 0; - asm ("LGR %0,%%r15" : "=d" (stack) ); + __asm__ ("LGR %0,%%r15" : "=d" (stack) ); /* We skip the call to this function, it makes no sense to record it. 
*/ stack = (struct layout *) stack->back_chain; while (cnt < size) diff --git a/sysdeps/s390/s390-64/dl-machine.h b/sysdeps/s390/s390-64/dl-machine.h index eeadbcd..0b01d2f 100644 --- a/sysdeps/s390/s390-64/dl-machine.h +++ b/sysdeps/s390/s390-64/dl-machine.h @@ -50,8 +50,8 @@ elf_machine_dynamic (void) { register Elf64_Addr *got; - asm( " larl %0,_GLOBAL_OFFSET_TABLE_\n" - : "=&a" (got) : : "0" ); + __asm__ ( " larl %0,_GLOBAL_OFFSET_TABLE_\n" + : "=&a" (got) : : "0" ); return *got; } @@ -62,11 +62,11 @@ elf_machine_load_address (void) { Elf64_Addr addr; - asm( " larl %0,_dl_start\n" - " larl 1,_GLOBAL_OFFSET_TABLE_\n" - " lghi 2,_dl_start@GOT\n" - " slg %0,0(2,1)" - : "=&d" (addr) : : "1", "2" ); + __asm__( " larl %0,_dl_start\n" + " larl 1,_GLOBAL_OFFSET_TABLE_\n" + " lghi 2,_dl_start@GOT\n" + " slg %0,0(2,1)" + : "=&d" (addr) : : "1", "2" ); return addr; } @@ -126,7 +126,7 @@ elf_machine_runtime_setup (struct link_map *l, int lazy, int profile) The C function `_dl_start' is the real entry point; its return value is the user program's entry point. */ -#define RTLD_START asm ("\n\ +#define RTLD_START __asm__ ("\n\ .text\n\ .align 4\n\ .globl _start\n\ diff --git a/sysdeps/s390/s390-64/iso-8859-1_cp037_z900.c b/sysdeps/s390/s390-64/iso-8859-1_cp037_z900.c index d020fd0..9ec84a3 100644 --- a/sysdeps/s390/s390-64/iso-8859-1_cp037_z900.c +++ b/sysdeps/s390/s390-64/iso-8859-1_cp037_z900.c @@ -184,28 +184,28 @@ __attribute__ ((aligned (8))) = #define TROO_LOOP(TABLE) \ { \ - register const unsigned char test asm ("0") = 0; \ - register const unsigned char *pTable asm ("1") = TABLE; \ - register unsigned char *pOutput asm ("2") = outptr; \ - register uint64_t length asm ("3"); \ + register const unsigned char test __asm__ ("0") = 0; \ + register const unsigned char *pTable __asm__ ("1") = TABLE; \ + register unsigned char *pOutput __asm__ ("2") = outptr; \ + register uint64_t length __asm__ ("3"); \ const unsigned char* pInput = inptr; \ uint64_t tmp; \ \ length = (inend - inptr < outend - outptr \ ? 
inend - inptr : outend - outptr); \ \ - asm volatile ("0: \n\t" \ - " troo %0,%1 \n\t" \ - " jz 1f \n\t" \ - " jo 0b \n\t" \ - " llgc %3,0(%1) \n\t" \ - " la %3,0(%3,%4) \n\t" \ - " mvc 0(1,%0),0(%3) \n\t" \ - " aghi %1,1 \n\t" \ - " aghi %0,1 \n\t" \ - " aghi %2,-1 \n\t" \ - " j 0b \n\t" \ - "1: \n" \ + __asm__ volatile ("0: \n\t" \ + " troo %0,%1 \n\t" \ + " jz 1f \n\t" \ + " jo 0b \n\t" \ + " llgc %3,0(%1) \n\t" \ + " la %3,0(%3,%4) \n\t" \ + " mvc 0(1,%0),0(%3) \n\t" \ + " aghi %1,1 \n\t" \ + " aghi %0,1 \n\t" \ + " aghi %2,-1 \n\t" \ + " j 0b \n\t" \ + "1: \n" \ \ : "+a" (pOutput), "+a" (pInput), "+d" (length), "=&a" (tmp) \ : "a" (pTable), "d" (test) \ diff --git a/sysdeps/s390/s390-64/multiarch/memcmp.c b/sysdeps/s390/s390-64/multiarch/memcmp.c index a3607e4..d861e0b 100644 --- a/sysdeps/s390/s390-64/multiarch/memcmp.c +++ b/sysdeps/s390/s390-64/multiarch/memcmp.c @@ -20,5 +20,5 @@ # include s390_libc_ifunc (memcmp) -asm(".weak bcmp ; bcmp = memcmp"); +__asm__(".weak bcmp ; bcmp = memcmp"); #endif diff --git a/sysdeps/s390/s390-64/stackguard-macros.h b/sysdeps/s390/s390-64/stackguard-macros.h index c8270fb..2c97d38 100644 --- a/sysdeps/s390/s390-64/stackguard-macros.h +++ b/sysdeps/s390/s390-64/stackguard-macros.h @@ -1,18 +1,18 @@ #include #define STACK_CHK_GUARD \ - ({ uintptr_t x; asm ("ear %0,%%a0; sllg %0,%0,32; ear %0,%%a1; lg %0,0x28(%0)" : "=a" (x)); x; }) + ({ uintptr_t x; __asm__ ("ear %0,%%a0; sllg %0,%0,32; ear %0,%%a1; lg %0,0x28(%0)" : "=a" (x)); x; }) /* On s390/s390x there is no unique pointer guard, instead we use the same value as the stack guard. */ -#define POINTER_CHK_GUARD \ - ({ \ - uintptr_t x; \ - asm ("ear %0,%%a0;" \ - "sllg %0,%0,32;" \ - "ear %0,%%a1;" \ - "lg %0,%1(%0)" \ - : "=a" (x) \ - : "i" (offsetof (tcbhead_t, stack_guard))); \ - x; \ - }) +#define POINTER_CHK_GUARD \ + ({ \ + uintptr_t x; \ + __asm__ ("ear %0,%%a0;" \ + "sllg %0,%0,32;" \ + "ear %0,%%a1;" \ + "lg %0,%1(%0)" \ + : "=a" (x) \ + : "i" (offsetof (tcbhead_t, stack_guard))); \ + x; \ + }) diff --git a/sysdeps/s390/s390-64/tls-macros.h b/sysdeps/s390/s390-64/tls-macros.h index 3c59436..d70ea6c 100644 --- a/sysdeps/s390/s390-64/tls-macros.h +++ b/sysdeps/s390/s390-64/tls-macros.h @@ -1,88 +1,88 @@ #define TLS_LE(x) \ ({ unsigned long __offset; \ - asm ("bras %0,1f\n" \ - "0:\t.quad " #x "@ntpoff\n" \ - "1:\tlg %0,0(%0)" \ - : "=a" (__offset) : : "cc" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.quad " #x "@ntpoff\n" \ + "1:\tlg %0,0(%0)" \ + : "=a" (__offset) : : "cc" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #ifdef PIC # define TLS_IE(x) \ ({ unsigned long __offset, __got; \ - asm ("bras %0,0f\n\t" \ - ".quad " #x "@gotntpoff\n" \ - "0:\tlarl %1,_GLOBAL_OFFSET_TABLE_\n\t" \ - "lg %0,0(%0)\n\t" \ - "lg %0,0(%0,%1):tls_load:" #x "\n" \ - : "=&a" (__offset), "=&a" (__got) : : "cc" ); \ + __asm__ ("bras %0,0f\n\t" \ + ".quad " #x "@gotntpoff\n" \ + "0:\tlarl %1,_GLOBAL_OFFSET_TABLE_\n\t" \ + "lg %0,0(%0)\n\t" \ + "lg %0,0(%0,%1):tls_load:" #x "\n" \ + : "=&a" (__offset), "=&a" (__got) : : "cc" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #else # define TLS_IE(x) \ ({ unsigned long __offset; \ - asm ("bras %0,1f\n" \ - "0:\t.quad " #x "@indntpoff\n" \ - "1:\t lg %0,0(%0)\n\t" \ - "lg %0,0(%0):tls_load:" #x \ - : "=&a" (__offset) : : "cc" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.quad " #x "@indntpoff\n" \ + "1:\t lg %0,0(%0)\n\t" \ + "lg %0,0(%0):tls_load:" #x \ + : "=&a" (__offset) : : "cc" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #endif #ifdef PIC # 
define TLS_LD(x) \ ({ unsigned long __offset, __save12; \ - asm ("bras %0,1f\n" \ - "0:\t.quad " #x "@tlsldm\n\t" \ - ".quad " #x "@dtpoff\n" \ - "1:\tlgr %1,%%r12\n\t" \ - "larl %%r12,_GLOBAL_OFFSET_TABLE_\n\t" \ - "lg %%r2,0(%0)\n\t" \ - "brasl %%r14,__tls_get_offset@plt:tls_ldcall:" #x "\n\t" \ - "lg %0,8(%0)\n\t" \ - "algr %0,%%r2\n\t" \ - "lgr %%r12,%1" \ - : "=&a" (__offset), "=&a" (__save12) \ - : : "cc", "0", "1", "2", "3", "4", "5", "14" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.quad " #x "@tlsldm\n\t" \ + ".quad " #x "@dtpoff\n" \ + "1:\tlgr %1,%%r12\n\t" \ + "larl %%r12,_GLOBAL_OFFSET_TABLE_\n\t" \ + "lg %%r2,0(%0)\n\t" \ + "brasl %%r14,__tls_get_offset@plt:tls_ldcall:" #x "\n\t" \ + "lg %0,8(%0)\n\t" \ + "algr %0,%%r2\n\t" \ + "lgr %%r12,%1" \ + : "=&a" (__offset), "=&a" (__save12) \ + : : "cc", "0", "1", "2", "3", "4", "5", "14" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #else # define TLS_LD(x) \ ({ unsigned long __offset; \ - asm ("bras %0,1f\n" \ - "0:\t.quad " #x "@tlsldm\n\t" \ - ".quad " #x "@dtpoff\n" \ - "1:\tlarl %%r12,_GLOBAL_OFFSET_TABLE_\n\t" \ - "lg %%r2,0(%0)\n\t" \ - "brasl %%r14,__tls_get_offset@plt:tls_ldcall:" #x "\n\t" \ - "lg %0,8(%0)\n\t" \ - "algr %0,%%r2" \ - : "=&a" (__offset) \ - : : "cc", "0", "1", "2", "3", "4", "5", "12", "14" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.quad " #x "@tlsldm\n\t" \ + ".quad " #x "@dtpoff\n" \ + "1:\tlarl %%r12,_GLOBAL_OFFSET_TABLE_\n\t" \ + "lg %%r2,0(%0)\n\t" \ + "brasl %%r14,__tls_get_offset@plt:tls_ldcall:" #x "\n\t" \ + "lg %0,8(%0)\n\t" \ + "algr %0,%%r2" \ + : "=&a" (__offset) \ + : : "cc", "0", "1", "2", "3", "4", "5", "12", "14" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #endif #ifdef PIC # define TLS_GD(x) \ ({ unsigned long __offset, __save12; \ - asm ("bras %0,1f\n" \ - "0:\t.quad " #x "@tlsgd\n" \ - "1:\tlgr %1,%%r12\n\t" \ - "larl %%r12,_GLOBAL_OFFSET_TABLE_\n\t" \ - "lg %%r2,0(%0)\n\t" \ - "brasl %%r14,__tls_get_offset@plt:tls_gdcall:" #x "\n\t" \ - "lgr %0,%%r2\n\t" \ - "lgr %%r12,%1" \ - : "=&a" (__offset), "=&a" (__save12) \ - : : "cc", "0", "1", "2", "3", "4", "5", "14" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.quad " #x "@tlsgd\n" \ + "1:\tlgr %1,%%r12\n\t" \ + "larl %%r12,_GLOBAL_OFFSET_TABLE_\n\t" \ + "lg %%r2,0(%0)\n\t" \ + "brasl %%r14,__tls_get_offset@plt:tls_gdcall:" #x "\n\t" \ + "lgr %0,%%r2\n\t" \ + "lgr %%r12,%1" \ + : "=&a" (__offset), "=&a" (__save12) \ + : : "cc", "0", "1", "2", "3", "4", "5", "14" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #else # define TLS_GD(x) \ ({ unsigned long __offset; \ - asm ("bras %0,1f\n" \ - "0:\t.quad " #x "@tlsgd\n" \ - "1:\tlarl %%r12,_GLOBAL_OFFSET_TABLE_\n\t" \ - "lg %%r2,0(%0)\n\t" \ - "brasl %%r14,__tls_get_offset@plt:tls_gdcall:" #x "\n\t" \ - "lgr %0,%%r2" \ - : "=&a" (__offset) \ - : : "cc", "0", "1", "2", "3", "4", "5", "12", "14" ); \ + __asm__ ("bras %0,1f\n" \ + "0:\t.quad " #x "@tlsgd\n" \ + "1:\tlarl %%r12,_GLOBAL_OFFSET_TABLE_\n\t" \ + "lg %%r2,0(%0)\n\t" \ + "brasl %%r14,__tls_get_offset@plt:tls_gdcall:" #x "\n\t" \ + "lgr %0,%%r2" \ + : "=&a" (__offset) \ + : : "cc", "0", "1", "2", "3", "4", "5", "12", "14" ); \ (int *) (__builtin_thread_pointer() + __offset); }) #endif diff --git a/sysdeps/s390/s390-64/utf16-utf32-z9.c b/sysdeps/s390/s390-64/utf16-utf32-z9.c index f887c34..d9177bf 100644 --- a/sysdeps/s390/s390-64/utf16-utf32-z9.c +++ b/sysdeps/s390/s390-64/utf16-utf32-z9.c @@ -163,22 +163,22 @@ gconv_end (struct __gconv_step *data) directions. 
*/ #define HARDWARE_CONVERT(INSTRUCTION) \ { \ - register const unsigned char* pInput asm ("8") = inptr; \ - register unsigned long long inlen asm ("9") = inend - inptr; \ - register unsigned char* pOutput asm ("10") = outptr; \ - register unsigned long long outlen asm("11") = outend - outptr; \ + register const unsigned char* pInput __asm__ ("8") = inptr; \ + register unsigned long long inlen __asm__ ("9") = inend - inptr; \ + register unsigned char* pOutput __asm__ ("10") = outptr; \ + register unsigned long long outlen __asm__("11") = outend - outptr; \ uint64_t cc = 0; \ \ - asm volatile (".machine push \n\t" \ - ".machine \"z9-109\" \n\t" \ - "0: " INSTRUCTION " \n\t" \ - ".machine pop \n\t" \ - " jo 0b \n\t" \ - " ipm %2 \n" \ - : "+a" (pOutput), "+a" (pInput), "+d" (cc), \ - "+d" (outlen), "+d" (inlen) \ - : \ - : "cc", "memory"); \ + __asm__ volatile (".machine push \n\t" \ + ".machine \"z9-109\" \n\t" \ + "0: " INSTRUCTION " \n\t" \ + ".machine pop \n\t" \ + " jo 0b \n\t" \ + " ipm %2 \n" \ + : "+a" (pOutput), "+a" (pInput), "+d" (cc), \ + "+d" (outlen), "+d" (inlen) \ + : \ + : "cc", "memory"); \ \ inptr = pInput; \ outptr = pOutput; \ diff --git a/sysdeps/s390/s390-64/utf8-utf16-z9.c b/sysdeps/s390/s390-64/utf8-utf16-z9.c index 6712c1c..b3bf554 100644 --- a/sysdeps/s390/s390-64/utf8-utf16-z9.c +++ b/sysdeps/s390/s390-64/utf8-utf16-z9.c @@ -145,22 +145,22 @@ gconv_end (struct __gconv_step *data) directions. */ #define HARDWARE_CONVERT(INSTRUCTION) \ { \ - register const unsigned char* pInput asm ("8") = inptr; \ - register unsigned long long inlen asm ("9") = inend - inptr; \ - register unsigned char* pOutput asm ("10") = outptr; \ - register unsigned long long outlen asm("11") = outend - outptr; \ + register const unsigned char* pInput __asm__ ("8") = inptr; \ + register unsigned long long inlen __asm__ ("9") = inend - inptr; \ + register unsigned char* pOutput __asm__ ("10") = outptr; \ + register unsigned long long outlen __asm__("11") = outend - outptr; \ uint64_t cc = 0; \ \ - asm volatile (".machine push \n\t" \ - ".machine \"z9-109\" \n\t" \ - "0: " INSTRUCTION " \n\t" \ - ".machine pop \n\t" \ - " jo 0b \n\t" \ - " ipm %2 \n" \ - : "+a" (pOutput), "+a" (pInput), "+d" (cc), \ - "+d" (outlen), "+d" (inlen) \ - : \ - : "cc", "memory"); \ + __asm__ volatile (".machine push \n\t" \ + ".machine \"z9-109\" \n\t" \ + "0: " INSTRUCTION " \n\t" \ + ".machine pop \n\t" \ + " jo 0b \n\t" \ + " ipm %2 \n" \ + : "+a" (pOutput), "+a" (pInput), "+d" (cc), \ + "+d" (outlen), "+d" (inlen) \ + : \ + : "cc", "memory"); \ \ inptr = pInput; \ outptr = pOutput; \ diff --git a/sysdeps/s390/s390-64/utf8-utf32-z9.c b/sysdeps/s390/s390-64/utf8-utf32-z9.c index 9a74448..fc7aec8 100644 --- a/sysdeps/s390/s390-64/utf8-utf32-z9.c +++ b/sysdeps/s390/s390-64/utf8-utf32-z9.c @@ -149,22 +149,22 @@ gconv_end (struct __gconv_step *data) directions. 
*/ #define HARDWARE_CONVERT(INSTRUCTION) \ { \ - register const unsigned char* pInput asm ("8") = inptr; \ - register unsigned long long inlen asm ("9") = inend - inptr; \ - register unsigned char* pOutput asm ("10") = outptr; \ - register unsigned long long outlen asm("11") = outend - outptr; \ + register const unsigned char* pInput __asm__ ("8") = inptr; \ + register unsigned long long inlen __asm__ ("9") = inend - inptr; \ + register unsigned char* pOutput __asm__ ("10") = outptr; \ + register unsigned long long outlen __asm__("11") = outend - outptr; \ uint64_t cc = 0; \ \ - asm volatile (".machine push \n\t" \ - ".machine \"z9-109\" \n\t" \ - "0: " INSTRUCTION " \n\t" \ - ".machine pop \n\t" \ - " jo 0b \n\t" \ - " ipm %2 \n" \ - : "+a" (pOutput), "+a" (pInput), "+d" (cc), \ - "+d" (outlen), "+d" (inlen) \ - : \ - : "cc", "memory"); \ + __asm__ volatile (".machine push \n\t" \ + ".machine \"z9-109\" \n\t" \ + "0: " INSTRUCTION " \n\t" \ + ".machine pop \n\t" \ + " jo 0b \n\t" \ + " ipm %2 \n" \ + : "+a" (pOutput), "+a" (pInput), "+d" (cc), \ + "+d" (outlen), "+d" (inlen) \ + : \ + : "cc", "memory"); \ \ inptr = pInput; \ outptr = pOutput; \ diff --git a/sysdeps/unix/sysv/linux/s390/brk.c b/sysdeps/unix/sysv/linux/s390/brk.c index 5192629..0e2511c 100644 --- a/sysdeps/unix/sysv/linux/s390/brk.c +++ b/sysdeps/unix/sysv/linux/s390/brk.c @@ -34,12 +34,12 @@ __brk (void *addr) void *newbrk; { - register void *__addr asm("2") = addr; + register void *__addr __asm__("2") = addr; - asm ("svc %b1\n\t" /* call sys_brk */ - : "=d" (__addr) - : "I" (SYS_ify(brk)), "r" (__addr) - : "cc", "memory" ); + __asm__ ("svc %b1\n\t" /* call sys_brk */ + : "=d" (__addr) + : "I" (SYS_ify(brk)), "r" (__addr) + : "cc", "memory" ); newbrk = __addr; } __curbrk = newbrk; diff --git a/sysdeps/unix/sysv/linux/s390/elision-trylock.c b/sysdeps/unix/sysv/linux/s390/elision-trylock.c index 5454b56..093011e 100644 --- a/sysdeps/unix/sysv/linux/s390/elision-trylock.c +++ b/sysdeps/unix/sysv/linux/s390/elision-trylock.c @@ -30,9 +30,9 @@ int __lll_trylock_elision (int *futex, short *adapt_count) { - __asm__ volatile (".machinemode \"zarch_nohighgprs\"\n\t" - ".machine \"all\"" - : : : "memory"); + __asm__ __volatile__ (".machinemode \"zarch_nohighgprs\"\n\t" + ".machine \"all\"" + : : : "memory"); /* Implement POSIX semantics by forbiding nesting elided trylocks. Sorry. After the abort the code is re-executed diff --git a/sysdeps/unix/sysv/linux/s390/s390-32/____longjmp_chk.c b/sysdeps/unix/sysv/linux/s390/s390-32/____longjmp_chk.c index 11c6098..070ccef 100644 --- a/sysdeps/unix/sysv/linux/s390/s390-32/____longjmp_chk.c +++ b/sysdeps/unix/sysv/linux/s390/s390-32/____longjmp_chk.c @@ -34,7 +34,7 @@ { \ uintptr_t cur_sp; \ uintptr_t new_sp = env->__gregs[9]; \ - __asm ("lr %0, %%r15" : "=r" (cur_sp)); \ + __asm__ ("lr %0, %%r15" : "=r" (cur_sp)); \ new_sp ^= guard; \ if (new_sp < cur_sp) \ { \ diff --git a/sysdeps/unix/sysv/linux/s390/s390-32/sysdep.h b/sysdeps/unix/sysv/linux/s390/s390-32/sysdep.h index d29b685..8f4bc79 100644 --- a/sysdeps/unix/sysv/linux/s390/s390-32/sysdep.h +++ b/sysdeps/unix/sysv/linux/s390/s390-32/sysdep.h @@ -197,38 +197,38 @@ #define INTERNAL_SYSCALL_DIRECT(name, err, nr, args...) 
\ ({ \ DECLARGS_##nr(args) \ - register int _ret asm("2"); \ - asm volatile ( \ - "svc %b1\n\t" \ - : "=d" (_ret) \ - : "i" (__NR_##name) ASMFMT_##nr \ - : "memory" ); \ + register int _ret __asm__("2"); \ + __asm__ __volatile__ ( \ + "svc %b1\n\t" \ + : "=d" (_ret) \ + : "i" (__NR_##name) ASMFMT_##nr \ + : "memory" ); \ _ret; }) #undef INTERNAL_SYSCALL_SVC0 #define INTERNAL_SYSCALL_SVC0(name, err, nr, args...) \ ({ \ DECLARGS_##nr(args) \ - register unsigned long _nr asm("1") = (unsigned long)(__NR_##name); \ - register int _ret asm("2"); \ - asm volatile ( \ - "svc 0\n\t" \ - : "=d" (_ret) \ - : "d" (_nr) ASMFMT_##nr \ - : "memory" ); \ + register unsigned long _nr __asm__("1") = (unsigned long)(__NR_##name); \ + register int _ret __asm__("2"); \ + __asm__ __volatile__ ( \ + "svc 0\n\t" \ + : "=d" (_ret) \ + : "d" (_nr) ASMFMT_##nr \ + : "memory" ); \ _ret; }) #undef INTERNAL_SYSCALL_NCS #define INTERNAL_SYSCALL_NCS(no, err, nr, args...) \ ({ \ DECLARGS_##nr(args) \ - register unsigned long _nr asm("1") = (unsigned long)(no); \ - register int _ret asm("2"); \ - asm volatile ( \ - "svc 0\n\t" \ - : "=d" (_ret) \ - : "d" (_nr) ASMFMT_##nr \ - : "memory" ); \ + register unsigned long _nr __asm__("1") = (unsigned long)(no); \ + register int _ret __asm__("2"); \ + __asm__ __volatile__ ( \ + "svc 0\n\t" \ + : "=d" (_ret) \ + : "d" (_nr) ASMFMT_##nr \ + : "memory" ); \ _ret; }) #undef INTERNAL_SYSCALL @@ -246,22 +246,22 @@ #define DECLARGS_0() #define DECLARGS_1(arg1) \ - register unsigned long gpr2 asm ("2") = (unsigned long)(arg1); + register unsigned long gpr2 __asm__ ("2") = (unsigned long)(arg1); #define DECLARGS_2(arg1, arg2) \ DECLARGS_1(arg1) \ - register unsigned long gpr3 asm ("3") = (unsigned long)(arg2); + register unsigned long gpr3 __asm__ ("3") = (unsigned long)(arg2); #define DECLARGS_3(arg1, arg2, arg3) \ DECLARGS_2(arg1, arg2) \ - register unsigned long gpr4 asm ("4") = (unsigned long)(arg3); + register unsigned long gpr4 __asm__ ("4") = (unsigned long)(arg3); #define DECLARGS_4(arg1, arg2, arg3, arg4) \ DECLARGS_3(arg1, arg2, arg3) \ - register unsigned long gpr5 asm ("5") = (unsigned long)(arg4); + register unsigned long gpr5 __asm__ ("5") = (unsigned long)(arg4); #define DECLARGS_5(arg1, arg2, arg3, arg4, arg5) \ DECLARGS_4(arg1, arg2, arg3, arg4) \ - register unsigned long gpr6 asm ("6") = (unsigned long)(arg5); + register unsigned long gpr6 __asm__ ("6") = (unsigned long)(arg5); #define DECLARGS_6(arg1, arg2, arg3, arg4, arg5, arg6) \ DECLARGS_5(arg1, arg2, arg3, arg4, arg5) \ - register unsigned long gpr7 asm ("7") = (unsigned long)(arg6); + register unsigned long gpr7 __asm__ ("7") = (unsigned long)(arg6); #define ASMFMT_0 #define ASMFMT_1 , "0" (gpr2) @@ -302,14 +302,14 @@ #define INTERNAL_VSYSCALL_CALL(fn, err, nr, args...) \ ({ \ DECLARGS_##nr(args) \ - register long _ret asm("2"); \ - asm volatile ( \ - "lr 10,14\n\t" \ - "basr 14,%1\n\t" \ - "lr 14,10\n\t" \ - : "=d" (_ret) \ - : "d" (fn) ASMFMT_##nr \ - : "cc", "memory", "0", "1", "10" CLOBBER_##nr); \ + register long _ret __asm__("2"); \ + __asm__ __volatile__ ( \ + "lr 10,14\n\t" \ + "basr 14,%1\n\t" \ + "lr 14,10\n\t" \ + : "=d" (_ret) \ + : "d" (fn) ASMFMT_##nr \ + : "cc", "memory", "0", "1", "10" CLOBBER_##nr); \ _ret; }) /* Pointer mangling support. 
*/ diff --git a/sysdeps/unix/sysv/linux/s390/s390-64/____longjmp_chk.c b/sysdeps/unix/sysv/linux/s390/s390-64/____longjmp_chk.c index daf1f6b..6da8d55 100644 --- a/sysdeps/unix/sysv/linux/s390/s390-64/____longjmp_chk.c +++ b/sysdeps/unix/sysv/linux/s390/s390-64/____longjmp_chk.c @@ -34,7 +34,7 @@ { \ uintptr_t cur_sp; \ uintptr_t new_sp = env->__gregs[9]; \ - __asm ("lgr %0, %%r15" : "=r" (cur_sp)); \ + __asm__ ("lgr %0, %%r15" : "=r" (cur_sp)); \ new_sp ^= guard; \ if (new_sp < cur_sp) \ { \ diff --git a/sysdeps/unix/sysv/linux/s390/s390-64/sysdep.h b/sysdeps/unix/sysv/linux/s390/s390-64/sysdep.h index a373207..3492267 100644 --- a/sysdeps/unix/sysv/linux/s390/s390-64/sysdep.h +++ b/sysdeps/unix/sysv/linux/s390/s390-64/sysdep.h @@ -203,38 +203,38 @@ #define INTERNAL_SYSCALL_DIRECT(name, err, nr, args...) \ ({ \ DECLARGS_##nr(args) \ - register long _ret asm("2"); \ - asm volatile ( \ - "svc %b1\n\t" \ - : "=d" (_ret) \ - : "i" (__NR_##name) ASMFMT_##nr \ - : "memory" ); \ + register long _ret __asm__("2"); \ + __asm__ __volatile__ ( \ + "svc %b1\n\t" \ + : "=d" (_ret) \ + : "i" (__NR_##name) ASMFMT_##nr \ + : "memory" ); \ _ret; }) #undef INTERNAL_SYSCALL_SVC0 #define INTERNAL_SYSCALL_SVC0(name, err, nr, args...) \ ({ \ DECLARGS_##nr(args) \ - register unsigned long _nr asm("1") = (unsigned long)(__NR_##name); \ - register long _ret asm("2"); \ - asm volatile ( \ - "svc 0\n\t" \ - : "=d" (_ret) \ - : "d" (_nr) ASMFMT_##nr \ - : "memory" ); \ + register unsigned long _nr __asm__("1") = (unsigned long)(__NR_##name); \ + register long _ret __asm__("2"); \ + __asm__ __volatile__ ( \ + "svc 0\n\t" \ + : "=d" (_ret) \ + : "d" (_nr) ASMFMT_##nr \ + : "memory" ); \ _ret; }) #undef INTERNAL_SYSCALL_NCS #define INTERNAL_SYSCALL_NCS(no, err, nr, args...) \ ({ \ DECLARGS_##nr(args) \ - register unsigned long _nr asm("1") = (unsigned long)(no); \ - register long _ret asm("2"); \ - asm volatile ( \ - "svc 0\n\t" \ - : "=d" (_ret) \ - : "d" (_nr) ASMFMT_##nr \ - : "memory" ); \ + register unsigned long _nr __asm__("1") = (unsigned long)(no); \ + register long _ret __asm__("2"); \ + __asm__ __volatile__ ( \ + "svc 0\n\t" \ + : "=d" (_ret) \ + : "d" (_nr) ASMFMT_##nr \ + : "memory" ); \ _ret; }) #undef INTERNAL_SYSCALL @@ -252,22 +252,22 @@ #define DECLARGS_0() #define DECLARGS_1(arg1) \ - register unsigned long gpr2 asm ("2") = (unsigned long)(arg1); + register unsigned long gpr2 __asm__ ("2") = (unsigned long)(arg1); #define DECLARGS_2(arg1, arg2) \ DECLARGS_1(arg1) \ - register unsigned long gpr3 asm ("3") = (unsigned long)(arg2); + register unsigned long gpr3 __asm__ ("3") = (unsigned long)(arg2); #define DECLARGS_3(arg1, arg2, arg3) \ DECLARGS_2(arg1, arg2) \ - register unsigned long gpr4 asm ("4") = (unsigned long)(arg3); + register unsigned long gpr4 __asm__ ("4") = (unsigned long)(arg3); #define DECLARGS_4(arg1, arg2, arg3, arg4) \ DECLARGS_3(arg1, arg2, arg3) \ - register unsigned long gpr5 asm ("5") = (unsigned long)(arg4); + register unsigned long gpr5 __asm__ ("5") = (unsigned long)(arg4); #define DECLARGS_5(arg1, arg2, arg3, arg4, arg5) \ DECLARGS_4(arg1, arg2, arg3, arg4) \ - register unsigned long gpr6 asm ("6") = (unsigned long)(arg5); + register unsigned long gpr6 __asm__ ("6") = (unsigned long)(arg5); #define DECLARGS_6(arg1, arg2, arg3, arg4, arg5, arg6) \ DECLARGS_5(arg1, arg2, arg3, arg4, arg5) \ - register unsigned long gpr7 asm ("7") = (unsigned long)(arg6); + register unsigned long gpr7 __asm__ ("7") = (unsigned long)(arg6); #define ASMFMT_0 #define ASMFMT_1 , "0" (gpr2) @@ -308,14 
+308,14 @@ #define INTERNAL_VSYSCALL_CALL(fn, err, nr, args...) \ ({ \ DECLARGS_##nr(args) \ - register long _ret asm("2"); \ - asm volatile ( \ - "lgr 10,14\n\t" \ - "basr 14,%1\n\t" \ - "lgr 14,10\n\t" \ - : "=d" (_ret) \ - : "a" (fn) ASMFMT_##nr \ - : "cc", "memory", "0", "1", "10" CLOBBER_##nr); \ + register long _ret __asm__("2"); \ + __asm__ __volatile__ ( \ + "lgr 10,14\n\t" \ + "basr 14,%1\n\t" \ + "lgr 14,10\n\t" \ + : "=d" (_ret) \ + : "a" (fn) ASMFMT_##nr \ + : "cc", "memory", "0", "1", "10" CLOBBER_##nr); \ _ret; }) /* Pointer mangling support. */ diff --git a/sysdeps/unix/sysv/linux/s390/sysconf.c b/sysdeps/unix/sysv/linux/s390/sysconf.c index 7d61c50..452af24 100644 --- a/sysdeps/unix/sysv/linux/s390/sysconf.c +++ b/sysdeps/unix/sysv/linux/s390/sysconf.c @@ -55,7 +55,7 @@ get_cache_info (int level, int attr, int type) { /* stfle (or zarch, high-gprs on s390-32) is not available. We are on an old machine. Return 256byte for LINESIZE for L1 d/i-cache, - otherwise 0. */ + otherwise 0. */ if (level == 1 && attr == CACHE_ATTR_LINESIZE) return 256L; else @@ -64,7 +64,7 @@ get_cache_info (int level, int attr, int type) /* Store facility list and check for z10. (see ifunc-resolver for details) */ - register unsigned long reg0 asm("0") = 0; + register unsigned long reg0 __asm__("0") = 0; #ifdef __s390x__ unsigned long stfle_bits; # define STFLE_Z10_MASK (1UL << (63 - 34)) @@ -72,19 +72,19 @@ get_cache_info (int level, int attr, int type) unsigned long long stfle_bits; # define STFLE_Z10_MASK (1ULL << (63 - 34)) #endif /* !__s390x__ */ - asm volatile(".machine push" "\n\t" - ".machinemode \"zarch_nohighgprs\"\n\t" - ".machine \"z9-109\"" "\n\t" - "stfle %0" "\n\t" - ".machine pop" "\n" - : "=QS" (stfle_bits), "+d" (reg0) - : : "cc"); + __asm__ __volatile__(".machine push" "\n\t" + ".machinemode \"zarch_nohighgprs\"\n\t" + ".machine \"z9-109\"" "\n\t" + "stfle %0" "\n\t" + ".machine pop" "\n" + : "=QS" (stfle_bits), "+d" (reg0) + : : "cc"); if (!(stfle_bits & STFLE_Z10_MASK)) { /* We are at least on a z9 machine. Return 256byte for LINESIZE for L1 d/i-cache, - otherwise 0. */ + otherwise 0. */ if (level == 1 && attr == CACHE_ATTR_LINESIZE) return 256L; else @@ -93,15 +93,15 @@ get_cache_info (int level, int attr, int type) /* Check cache topology, if cache is available at this level. */ arg = (CACHE_LEVEL_MAX - level) * 8; - asm volatile (".machine push\n\t" - ".machine \"z10\"\n\t" - ".machinemode \"zarch_nohighgprs\"\n\t" - "ecag %0,%%r0,0\n\t" /* returns 64bit unsigned integer. */ - "srlg %0,%0,0(%1)\n\t" /* right align 8bit cache info field. */ - ".machine pop" - : "=&d" (val) - : "a" (arg) - ); + __asm__ __volatile__ (".machine push\n\t" + ".machine \"z10\"\n\t" + ".machinemode \"zarch_nohighgprs\"\n\t" + "ecag %0,%%r0,0\n\t" /* returns 64bit unsigned integer. */ + "srlg %0,%0,0(%1)\n\t" /* right align 8bit cache info field. */ + ".machine pop" + : "=&d" (val) + : "a" (arg) + ); val &= 0xCUL; /* Extract cache scope information from cache topology summary. (bits 4-5 of 8bit-field; 00 means cache does not exist). */ if (val == 0) @@ -109,14 +109,14 @@ get_cache_info (int level, int attr, int type) /* Get cache information for level, attribute and type. 
*/ cmd = (attr << 4) | ((level - 1) << 1) | type; - asm volatile (".machine push\n\t" - ".machine \"z10\"\n\t" - ".machinemode \"zarch_nohighgprs\"\n\t" - "ecag %0,%%r0,0(%1)\n\t" - ".machine pop" - : "=d" (val) - : "a" (cmd) - ); + __asm__ __volatile__ (".machine push\n\t" + ".machine \"z10\"\n\t" + ".machinemode \"zarch_nohighgprs\"\n\t" + "ecag %0,%%r0,0(%1)\n\t" + ".machine pop" + : "=d" (val) + : "a" (cmd) + ); return val; }
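
Minimal sketch of the failure described at the top of this mail -- not
part of the patch.  The file and function names are hypothetical; the
instruction and constraints are taken from
sysdeps/s390/fpu/bits/mathinline.h, so this only compiles for an
s390/s390x target.

  static inline double
  my_sqrt (double x)          /* hypothetical name, for illustration only */
  {
    double res;
    /* Plain "asm" is a GNU extension keyword that GCC disables under
       -std=c99/-std=c11, producing
         error: implicit declaration of function 'asm'
       followed by a syntax error at the first ':'.  The reserved
       spelling below is accepted in all language modes.  */
    __asm__ ( "sqdbr %0,%1" : "=f" (res) : "f" (x) );
    return res;
  }

Compiling the same body with plain "asm" and -std=c11 reproduces the
two diagnostics quoted above; with the default -std=gnu11 both
spellings are accepted.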
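
The consistency part of the change ("asm volatile" ->
"__asm__ __volatile__", and register variables bound to hardware
registers) follows the same rule.  A small hedged sketch, reusing the
instructions from the memusage.h and fpu_control.h hunks above (the
wrapper names are illustrative, not glibc APIs):

  #include <stdint.h>

  static inline uintptr_t
  current_sp (void)                 /* illustrative wrapper name */
  {
    /* was: register uintptr_t stack_ptr asm ("15"); */
    register uintptr_t stack_ptr __asm__ ("15");
    return stack_ptr;
  }

  static inline unsigned int
  current_fpc (void)                /* illustrative wrapper name */
  {
    unsigned int cw;
    /* was: __asm__ volatile ("efpc %0,0" : "=d" (cw)); */
    __asm__ __volatile__ ("efpc %0,0" : "=d" (cw));
    return cw;
  }

Functionally nothing changes; only the spelling of the keywords does.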