
Add atomic type qualifier

Message ID 51F2AEB1.60408@redhat.com
State New

Commit Message

Andrew MacLeod July 26, 2013, 5:15 p.m. UTC
This patch adds an atomic type qualifier to GCC.   It can be accessed 
via __attribute__((atomic)) or in C11 mode via the _Atomic keyword.
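
For example, a minimal usage sketch (variable and function names are made 
up; this assumes the patch as posted, with the __atomic builtins described 
below accepting the qualified objects directly):

  /* Sketch only: both spellings produce an atomic-qualified int.  */
  __attribute__((atomic)) int counter;   /* attribute form, any -std  */
  _Atomic int flag;                      /* keyword form, -std=c11    */

  void
  bump (void)
  {
    __atomic_fetch_add (&counter, 1, __ATOMIC_SEQ_CST);
    __atomic_store_n (&flag, 1, __ATOMIC_RELEASE);
  }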

What it does:
  *  All the __atomic builtins now expect an atomic qualified value for 
the atomic variable.  Non-atomic values can still be passed in, but they 
are not guaranteed to work if the original type is not compatible with 
the atomic type.  No fears: at the moment, every target's atomic types 
line up with the unsigned type of the same size, so like magic, it's all 
good for existing code.
  *  There is a new target hook "atomic_align_for_mode" which a target 
can use to override the default alignment when the atomic variation 
requires something different.   There may be other attributes 
eventually, but for now alignment is the only thing supported.  I 
considered size, but the effort to do that is what drove me to the 
re-architecture project :-P  At least the mechanism is now in place for 
overrides when atomic requirements aren't just the default they used to 
be.  (I introduced atomicQI_type_node, atomicHI_type_node, 
atomicSI_type_node, atomicDI_type_node, and atomicTI_type_node.)  I 
tested this by aligning all shorts to 32 byte boundaries and 
bootstrapping/running all the tests.
  * I went through the front ends trying to treat the atomic qualifier 
mostly like a volatile, since that is the basic behaviour one expects.  
In the backend, it sets the TREE_IS_VOLATILE bit, so behaviour ought to 
be the same as before, plus they are/will always be used in the 
__atomic built-in functions.
  * I changed the libstdc++ atomic implementation so that all the atomic 
classes use __attribute__((atomic)) on their data members, ensuring that 
if anyone overrides the atomic type, it should work fine with C++ 
atomics.   It also served as a good test that I had the type set up 
properly... getting TYPE_CANONICAL() correct throughout the C++ compiler 
was, well..., let's just say painful.
* I changed 2 of the atomic test sets: one uses __attribute__((atomic)) 
to ensure the attribute compiles OK, and the other uses _Atomic and 
--std=c11 to ensure that compiles OK.

What it doesn't do:
   * It doesn't implement the C11 expression expansion into atomic 
built-ins.  I.e., you can't write:
_Atomic int x;
  x = 0;
       and have the result be an atomic operation calling __atomic_store 
(&x, 0).   That will be in a follow-on patch, so none of the expression 
expansion from C11 is included yet.  This just enables that (see the 
sketch after this list).
  * It doesn't do a full set of checks when _Atomic is used in invalid 
ways.  I don't know the standard or the front end well enough; I'm 
hoping someone else will pitch in with that eventually, assuming someone 
cares enough :-)  There are a couple of errors issued, but that is it.
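
To make the current state concrete, a small sketch (same style of made-up 
variable as above; the explicit builtin call is roughly what the follow-on 
expansion would generate):

  /* Sketch: behaviour before the expression-expansion follow-on patch.  */
  _Atomic int x;

  void
  f (void)
  {
    x = 0;                                       /* not yet expanded to an atomic store      */
    __atomic_store_n (&x, 0, __ATOMIC_SEQ_CST);  /* roughly what the expansion would produce */
  }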

This bootstraps on x86_64, and passes all the testsuites with no new 
regressions.  I didn't split it up any more than this because most of it 
is inter-related... plus this is already split out from a much larger 
set of changes :-)  I did try to at least organize the ordering; all the 
long boring stuff is at the end :-)

Both front end and back end stuff in here.  I think I caught it all, but 
have a look. There is time to find any remaining issues I may have missed.

Andrew


HP, you might want to give this a try and see if you can get the 
alignment correct for the cris port finally :-)
Basically, on x86 I tried it with the following 2 snippets, which aligned 
atomic short int on 256-bit boundaries, and it bootstrapped and passed 
all the test suites like that as well:

in ix86_option_override_internal():
    targetm.atomic_align_for_mode = atomic_align_override;

And then provided:

static unsigned int
atomic_align_override (enum machine_mode mode)
{
   if (mode == HImode)
     return 256;
   return 0;
}

Comments

Andi Kleen July 26, 2013, 7:01 p.m. UTC | #1
Andrew MacLeod <amacleod@redhat.com> writes:
>
> What it doesn't do:
>   * It doesn't implement the C11 expression expansion into atomic
> built-ins.  ie, you can't write:
> _Atomic int x;
>  x = 0;
>       and have the result be an atomic operation calling
> __atomic_store (&x, 0).   

How would this work if you want a different memory order?

-Andi
Andrew MacLeod July 26, 2013, 7:33 p.m. UTC | #2
On 07/26/2013 03:01 PM, Andi Kleen wrote:
> Andrew MacLeod <amacleod@redhat.com> writes:
>> What it doesn't do:
>>    * It doesn't implement the C11 expression expansion into atomic
>> built-ins.  ie, you can't write:
>> _Atomic int x;
>>   x = 0;
>>        and have the result be an atomic operation calling
>> __atomic_store (&x, 0).
> How would this work if you want a different memory order?
>
>
The way the standard is defined, any implicit operation like that is 
seq_cst.  If you want something other than seq_cst, you have to 
explicitly call atomic_store (&x, 0, model).

C11 also provides all those same atomic routines that C++11 provides, for 
this reason.  They just decided to try to implement the C++ atomic 
templates in the language along the way :-P  so it feels very much like 
C++ atomics...
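
As a sketch with the existing GCC builtins (made-up variable): the 
implicit form gets seq_cst, and anything weaker has to be spelled out:

  _Atomic int done;

  void
  publish (void)
  {
    done = 1;                                       /* implicit C11 semantics: seq_cst           */
    __atomic_store_n (&done, 1, __ATOMIC_RELEASE);  /* explicit model when seq_cst is too strong */
  }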

Andrew
Hans-Peter Nilsson July 26, 2013, 8:29 p.m. UTC | #3
On Fri, 26 Jul 2013, Andrew MacLeod wrote:
> This patch adds an atomic type qualifier to GCC.   It can be accessed via
> __attribute__((atomic)) or in C11 mode via the _Atomic keyword.

> HP, you might want to give this a try and see if you can get the alignment
> correct for the cris port finally :-)

Looks like the means to that end are there now.  Thanks!
Though I won't be able to look into it for a while.
(Looking at two more weeks of vacation and likely a hectic
period after that.)

Note to self (mostly): also implement target-specific warning
when the layout of a composite type including an atomic
("naturally" aligned) type would differ from normal (unaligned;
packed) layout.

brgds, H-P
Andi Kleen July 26, 2013, 9:13 p.m. UTC | #4
Andrew MacLeod <amacleod@redhat.com> writes:
>>
> The way the standard is defined, any implicit operation like that is
> seq_cst.  If you want something other than seq-cst, you have to
> explicitly call atomic_store (&x, 0, model).

Thanks.

This doesn't sound like a good default for x86. seq_cst requires 
somewhat expensive extra barriers that most people most likely don't
need.

-Andi
Andrew MacLeod July 26, 2013, 9:34 p.m. UTC | #5
On 07/26/2013 05:13 PM, Andi Kleen wrote:
> Andrew MacLeod <amacleod@redhat.com> writes:
>> The way the standard is defined, any implicit operation like that is
>> seq_cst.  If you want something other than seq-cst, you have to
>> explicitly call atomic_store (&x, 0, model).
> Thanks.
>
> This doesn't sound like a good default for x86. seq_cst requires
> somewhat expensive extra barriers that most people most likely don't
> need.
>
It's the only truly safe default for a multi-threaded environment.

These are the same defaults you get with C++11 as well if you don't 
explicitly use a memory model in your atomic operations, e.g.:
  __int_type  load(memory_order __m = memory_order_seq_cst) const noexcept

So if you don't want seq_cst, you need to specify exactly what you do want.


Andrew
Joseph Myers July 26, 2013, 11:21 p.m. UTC | #6
On Fri, 26 Jul 2013, Andrew MacLeod wrote:

> This patch adds an atomic type qualifier to GCC.   It can be accessed via
> __attribute__((atomic)) or in C11 mode via the _Atomic keyword.

Why the attribute - why not just the keyword?

I'll review the patch in detail later (which will involve checking each 
reference to "atomic" or "qualified" in the language part of C11, checking 
for appropriate implementation and complaining if it appears to be 
missing, and checking for appropriate testcases and complaining if those 
appear to be missing).  But some comments now:

* pedwarns for using a C11 feature in previous standard modes should as 
per usual practice be pedwarns-if-pedantic.

* I don't see anything obvious in the parser changes to implement the 
_Atomic ( type-name ) version of the syntax for atomic types (6.7.2.4; 
that form is illustrated in the snippet after this list).

* When C11 refers to qualified types, by default it does not include 
atomic types (6.2.5#27).  What's your rationale for including "atomic" in 
TYPE_QUALS rather than making it separate?  With either approach, a review 
of every reference to qualifiers in the front end is needed to determine 
what's correct for "atomic"; did you find that including "atomic" in 
qualifiers in the implementation made for fewer changes?
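
For reference, the two C11 spellings (standard 6.7.2.4 semantics; 
placement matters for pointers):

  _Atomic(int)   a;   /* specifier form: a is an atomic int     */
  _Atomic int    b;   /* qualifier form: same type as a         */

  _Atomic(int *) p;   /* p is an atomic pointer to int          */
  _Atomic int   *q;   /* q is a plain pointer to an atomic int  */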
Andrew MacLeod July 27, 2013, 12:47 a.m. UTC | #7
On 07/26/2013 07:21 PM, Joseph S. Myers wrote:
> On Fri, 26 Jul 2013, Andrew MacLeod wrote:
>
>> This patch adds an atomic type qualifier to GCC.   It can be accessed via
>> __attribute__((atomic)) or in C11 mode via the _Atomic keyword.
> Why the attribute - why not just the keyword?
2 reasons:
   1 - We currently have at least one target that cannot express its 
alignment requirements for an atomic value (i.e., a short with 4-byte 
alignment).  Use of __attribute__((atomic)) will allow proper expression 
and usage within the atomic built-ins without using C11.
  2 - Compatibility with C11 and C++11.  Ultimately, the atomic object 
can be "upsized" (i.e., a 6-byte object sized up to 8 bytes).  _Atomic 
would do that for C11, but we can't really use _Atomic within the C++11 
template files, so __attribute__((atomic)) seemed pretty natural.  As 
well, this will allow C++ atomic templates to work on the aforementioned 
target(s).


>
> I'll review the patch in detail later (which will involve checking each
> reference to "atomic" or "qualified" in the language part of C11, checking
> for appropriate implementation and complaining if it appears to be
> missing, and checking for appropriate testcases and complaining if those
> appear to be missing).  But some comments now:
>
> * pedwarns for using a C11 feature in previous standard modes should as
> per usual practice be pedwarns-if-pedantic.
>
> * I don't see anything obvious in the parser changes to implement the
> _Atomic ( type-name ) version of the syntax for atomic types (6.7.2.4).
>
> * When C11 refers to qualified types, by default it does not include
> atomic types (6.2.5#27).  What's your rationale for including "atomic" in
> TYPE_QUALS rather than making it separate?  With either approach, a review
> of every reference to qualifiers in the front end is needed to determine
> what's correct for "atomic"; did you find that including "atomic" in
> qualifiers in the implementation made for fewer changes?
>
I'm not good at front ends... I don't really know them at all.  Parsing 
and syntax and semantics scare me.  This was my initial attempt to enable 
_Atomic, hoping for help with the rest from those who do :-)  
Observations and changes welcome.


Andrew
Joseph Myers July 28, 2013, 7:34 p.m. UTC | #8
On Fri, 26 Jul 2013, Andrew MacLeod wrote:

> What it doesn't do:

* It doesn't implement the stdatomic.h header - do you intend that to be 
provided by GCC or glibc?

(Substantive review of the full patch still to come.)

>   * It doesn't implement the C11 expression expansion into atomic built-ins.
> ie, you can't write:
> _Atomic int x;
>  x = 0;
>       and have the result be an atomic operation calling __atomic_store (&x,
> 0).   That will be in a  follow on patch. So none of the expression expansion
> from C11 is included yet. This just enables that.

The hardest part will probably be compound assignment to an atomic object 
where either operand of the assignment has floating-point type - see C11 
footnote 113, but you can't actually use feholdexcept or feclearexcept or 
feupdateenv here because that would introduce libm dependencies, so back 
ends will need to provide appropriate insn patterns to do those operations 
inline.
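
For illustration, the rough shape of that footnote 113 sequence for 
"*d /= v", written with the generic __atomic builtins and the <fenv.h> 
functions purely to name the steps (as noted above, a real lowering could 
not call libm; the fe* calls stand in for the insn patterns a back end 
would provide):

  #include <fenv.h>
  #include <stdbool.h>

  /* Sketch only; not the GCC implementation.  */
  void
  atomic_div_assign (_Atomic double *d, double v)
  {
    double old, new_val;
    fenv_t env;

    __atomic_load (d, &old, __ATOMIC_SEQ_CST);
    feholdexcept (&env);                /* save environment, clear FP exception flags */
    do
      {
        new_val = old / v;              /* may raise FP exceptions                    */
        if (__atomic_compare_exchange (d, &old, &new_val, false,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
          break;
        feclearexcept (FE_ALL_EXCEPT);  /* discard exceptions of the failed attempt   */
      }
    while (true);
    feupdateenv (&env);                 /* restore env, re-raise the surviving flags  */
  }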
Andrew MacLeod July 29, 2013, 2:59 p.m. UTC | #9
On 07/28/2013 03:34 PM, Joseph S. Myers wrote:
> On Fri, 26 Jul 2013, Andrew MacLeod wrote:
>
>> What it doesn't do:
> * It doesn't implement the stdatomic.h header - do you intend that to be
> provided by GCC or glibc?
>
> (Substantive review of the full patch still to come.)

I figured GCC would provide it... but I hadn't given a ton of thought 
to whether that was good or bad.  Again, that sort of thing isn't really 
my strong suit :-)
>
>>    * It doesn't implement the C11 expression expansion into atomic built-ins.
>> ie, you can't write:
>> _Atomic int x;
>>   x = 0;
>>        and have the result be an atomic operation calling __atomic_store (&x,
>> 0).   That will be in a  follow on patch. So none of the expression expansion
>> from C11 is included yet. This just enables that.
> The hardest part will probably be compound assignment to an atomic object
> where either operand of the assignment has floating-point type - see C11
> footnote 113, but you can't actually use feholdexcept or feclearexcept or
> feupdateenv here because that would introduce libm dependencies, so back
> ends will need to provide appropriate insn patterns to do those operations
> inline.
>   
Blick.  What were they smoking the night before...  I guess we'll probably 
need to enhance the current atomic patterns in RTL...  We should be 
able to figure out that it's floating point and invoke the appropriate 
RTL pattern during expansion rather than an existing one.  OR just 
frigging call libatomic and let it deal with it. :-)  I guess there 
wouldn't be any other fallback available.  Actually, that's a mess... no 
way for the library to know it's floating point unless you tell it 
somehow with new entry points or some such.  Very lame.

I planned to do the work in gimplification... let the atomic decls 
through, and during gimplification, loads or stores of an atomic decl 
would be converted to the appropriate load or store builtin, and at the 
same time recognize the 'decl = decl op value' expression and replace 
those as appropriate with atomic_op_fetch operations.  I had discussed 
this at some length with Lawrence Crowl and Jeffrey Yasskin some time 
ago.  At gimplification time we no longer know whether the original form 
was decl op= val or decl = decl op val, but the decision was that it is 
OK to recognize decl = decl op val and make that atomic... it would 
still satisfy the language requirements.
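
As a sketch of the simple integer case (made-up variable), both source 
forms would end up lowered to the same existing builtin:

  _Atomic int x;

  void
  increment (void)
  {
    /* Either "x += 1" or "x = x + 1" becomes, roughly: */
    __atomic_add_fetch (&x, 1, __ATOMIC_SEQ_CST);
  }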

Andrew
Joseph Myers July 29, 2013, 4:06 p.m. UTC | #10
On Mon, 29 Jul 2013, Andrew MacLeod wrote:

> Blick. What were they smoking the night before... I guess we'll probably need
> to enhance the current atomic patterns in RTL...    We should be able to
> figure out that its floating point and invoke the appropriate RTL pattern
> during expansion rather than an existing one.    OR just frigging call
> libatomic and let it deal with it. :-)  I guess there wouldnt be any other
> fallback available. Actually, thats a mess... no way for the librtary to know
> its floating point unless you tell it somehow with new entry points or
> somesuch..  very lame.

Note that you only need *one* of the types to be floating-point for this 
issue to apply.  If you have

_Atomic char c;
float f;

c /= f;

then all the same requirements apply; there may be exceptions to discard 
not just from the division, but from the conversion of a float division 
result to char.

> I planned to do the work in gimplification... let the atomic decls through,
> and during gimplification, loads or stores of an atomic decl would be
> converted to the appropriate load or store builtin, and at the same time
> recognize the  'decl = decl op value' expression and replace those as
> appropriate with atomic_op_fetch operations.   I had discussed this at some
> length with Lawrence crowl and Jeffrey Yasskin some time ago..   At
> gimplification time we no longer know whether the original form was
> decl op= val  or decl = decl op val;, but the decision was that it is ok to
> recognize decl = decl op val and make that atomic.. it would still satisfy the
> language requirements..

I think that's probably OK (though, is this a theorem of the formal 
modelling work that has been done on the memory model?), but note it's not 
just a decl but an arbitrary pointer dereference (the address of the 
lvalue is only evaluated once, no matter how many compare-and-exchange 
operations are needed), and the operation may end up looking like

*ptr = (convert to type of *ptr) ((promote) *ptr op (promote) val)

rather than a simple decl = decl op val.  Or something more complicated if 
the operation involves complex numbers - look at what gets generated for 
mixed real / complex arithmetic, for example.  Given

_Atomic _Complex float f;
double d;

f += d;

the atomicity is for the whole complex number (and so the 
compare-and-exchange needs to work on the whole number) although only the 
real part is modified by the addition.
Andrew MacLeod July 29, 2013, 4:12 p.m. UTC | #11
On 07/29/2013 10:59 AM, Andrew MacLeod wrote:
> Blick. What were they smoking the night before... I guess we'll 
> probably need to enhance the current atomic patterns in RTL...    We 
> should be able to figure out that its floating point and invoke the 
> appropriate RTL pattern during expansion rather than an existing 
> one.    OR just frigging call libatomic and let it deal with it. :-)  
> I guess there wouldnt be any other fallback available. Actually, thats 
> a mess... no way for the librtary to know its floating point unless 
> you tell it somehow with new entry points or somesuch..  very lame.
>
Actually, in hindsight, we will need new atomic builtins...  The standard 
lists other operators we need to support that we currently do not:
*, /, %, +, -, <<, >>, &, ^, |
and we currently don't support *, /, modulus, or right or left shift.

Maybe the simple thing is simply to emit the compare_exchange loop for 
all these, and not bother libatomic with them at all.  We can 
eventually add RTL patterns for them if we want "better performance" for 
these new bits.
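
A sketch of what that emitted loop could look like for one of the missing 
operators (multiplication; the helper name is made up, and the integer 
case is used so the _n builtins apply):

  /* Sketch: "*x *= v" via a compare-and-swap loop, for an operator with
     no __atomic_fetch_* support.  */
  int
  atomic_mul_fetch_int (_Atomic int *x, int v)
  {
    int old = __atomic_load_n (x, __ATOMIC_SEQ_CST);
    int new_val;
    do
      new_val = old * v;
    while (!__atomic_compare_exchange_n (x, &old, new_val, false,
                                         __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
    return new_val;
  }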

As for the floating point operations, emit the same loop, and if there 
is a target pattern, emit it; otherwise issue a warning saying those 
footnote items are not properly protected, or whatever.

Also, if I read this right, "arithmetic type" also includes complex 
types, does it not?  Which means we need to support this for complex as 
well...

Does C++14 (or onward) intend to support these additions as well?

Andrew
Andrew MacLeod July 29, 2013, 4:29 p.m. UTC | #12
On 07/29/2013 12:06 PM, Joseph S. Myers wrote:
> On Mon, 29 Jul 2013, Andrew MacLeod wrote:
>
>> Blick. What were they smoking the night before... I guess we'll probably need
>> to enhance the current atomic patterns in RTL...    We should be able to
>> figure out that its floating point and invoke the appropriate RTL pattern
>> during expansion rather than an existing one.    OR just frigging call
>> libatomic and let it deal with it. :-)  I guess there wouldnt be any other
>> fallback available. Actually, thats a mess... no way for the librtary to know
>> its floating point unless you tell it somehow with new entry points or
>> somesuch..  very lame.
> Note that you only need *one* of the types to be floating-point for this
> issue to apply.  If you have
>
> _Atomic char c;
> float f;
>
> c /= f;
>
> then all the same requirements apply; there may be exceptions to discard
> not just from the division, but from the conversion of a float division
> result to char.
Yes, unfortunately.  But gimplification should turn this into:
t0 = c
t1 = (float) t0
t2 = t1 / f
t3 = (char) t2
c = t3

so we simply see a floating point division bracketed by a load and store 
of a TYPE_ATOMIC expression.  So we should be able to recognize the 
need for the extra floating point handling based on the type of the 
operation, and just wrap the whole blasted thing.

I must say, I'm not a fan :-)

>
>> I planned to do the work in gimplification... let the atomic decls through,
>> and during gimplification, loads or stores of an atomic decl would be
>> converted to the appropriate load or store builtin, and at the same time
>> recognize the  'decl = decl op value' expression and replace those as
>> appropriate with atomic_op_fetch operations.   I had discussed this at some
>> length with Lawrence crowl and Jeffrey Yasskin some time ago..   At
>> gimplification time we no longer know whether the original form was
>> decl op= val  or decl = decl op val;, but the decision was that it is ok to
>> recognize decl = decl op val and make that atomic.. it would still satisfy the
>> language requirements..
> I think that's probably OK (though, is this a theorem of the formal
> modelling work that has been done on the memory model?), but note it's not
I have no idea if it's a theorem or not.
> just a decl but an arbitrary pointer dereference (the address of the
> lvalue is only evaluated once, no matter how many compare-and-exchange
> operations are needed), and the operation may end up looking like
>
> *ptr = (convert to type of *ptr) ((promote) *ptr op (promote) val)
>
> rather than a simple decl = decl op val.  Or something more complicated if
I think gimplification takes care of that as well, since all assignments 
have to be in the form DECL = VALUE op VALUE.  It constructs the 
sequence so that something like
*op_expr += 1

is properly transformed to
t0 = op_expr
t1 = *t0
t2 = t1 + 1
*t0 = t2

With the TYPE_ATOMIC attribute set and rippled through the op_expr 
expression, we know that *t0 is atomic in nature, so t1 is an atomic 
load, *t0 = t2 is an atomic store, and looking back at t2 = t1 + 1 we 
can see that this is an atomic += 1.

Same thing with a normal load of an expression... the TYPE_ATOMIC gimple 
attribute *ought* to tell us everything we need.

> the operation involves complex numbers - look at what gets for mixed real
> / complex arithmetic, for example.  Given
>
> _Atomic _Complex float f;
> double d;
>
> f += d;
>
> the atomicity is for the whole complex number (and so the
> compare-and-exchange needs to work on the whole number) although only the
> real part is modified by the addition.


Complex I hadn't thought about until just now; I'll have to look.  I 
know we can deal with parts of complex sometimes.  Hopefully at 
gimplification time we still have the whole complex reference, and if we 
just take care of that with the atomic builtins, we'll maintain the 
entire thing as we need.

Andrew



>
Joseph Myers July 29, 2013, 4:42 p.m. UTC | #13
On Mon, 29 Jul 2013, Andrew MacLeod wrote:

> complex I hadn't thought about until just now, I'll have to look.  I know we
> can deal with parts on complex sometimes.   Hopefully at gimplification time
> we still have the whole complex reference and if we just take care of that
> with the atomic builtins, we'll maintain the entire thing as we need.

You have things in a fairly complicated form, building up COMPLEX_EXPRs 
out of operations on the individual parts of the complex operands.
Andrew MacLeod July 29, 2013, 5:36 p.m. UTC | #14
On 07/29/2013 12:42 PM, Joseph S. Myers wrote:
> On Mon, 29 Jul 2013, Andrew MacLeod wrote:
>
>> complex I hadn't thought about until just now, I'll have to look.  I know we
>> can deal with parts on complex sometimes.   Hopefully at gimplification time
>> we still have the whole complex reference and if we just take care of that
>> with the atomic builtins, we'll maintain the entire thing as we need.
> You have things in a fairly complicated form, building up COMPLEX_EXPRs
> out of operations on the individual parts of the complex operands.
>
I tried:

__complex__ double d;
int main (void)
{
  d = 0;
  d = d + 5;
}

and it seems to break it into:
d = __complex__ (0.0, 0.0);

and
d.1 = d;
   d.0 = d.1;
   D.1723 = REALPART_EXPR <d.0>;
   D.1724 = D.1723 + 5.0e+0;
   D.1725 = IMAGPART_EXPR <d.0>;
   d.2 = COMPLEX_EXPR <D.1724, D.1725>;
   d = d.2;

so again the loads and stores to (D) appear to completely wrap the 
entire complex operation, so this should be handle-able the same way...

So you really should be able to key into the atomic load from D followed 
by the store to D and look at what's in between.

I think this is straightforward in the gimplifier: we ought to have the 
d = d op V expression at some point and be able to detect that d is the 
same and atomic, and check op.  But if it turns out not to be, then I 
could simply turn those loads and stores into atomic loads and stores 
in the gimplifier and stop there, then have a very early 
always-run SSA pass pattern match looking for atomic stores fed from 
atomic loads, examine the operations in between looking for 
patterns that match d = d op v, and then turn the loads/store and 
intermediate bits into the specified compare_exchange loops...

I'll get to this shortly.

Andrew
Andrew MacLeod July 29, 2013, 11:04 p.m. UTC | #15
On 07/29/2013 12:06 PM, Joseph S. Myers wrote:
> On Mon, 29 Jul 2013, Andrew MacLeod wrote:
>
>> I planned to do the work in gimplification... let the atomic decls through,
>> and during gimplification, loads or stores of an atomic decl would be
>> converted to the appropriate load or store builtin, and at the same time
>> recognize the  'decl = decl op value' expression and replace those as
>> appropriate with atomic_op_fetch operations.   I had discussed this at some
>> length with Lawrence crowl and Jeffrey Yasskin some time ago..   At
>> gimplification time we no longer know whether the original form was
>> decl op= val  or decl = decl op val;, but the decision was that it is ok to
>> recognize decl = decl op val and make that atomic.. it would still satisfy the
>> language requirements..
> I think that's probably OK (though, is this a theorem of the formal
> modelling work that has been done on the memory model?), but note it's not
> just a decl but an arbitrary pointer dereference (the address of the
> lvalue is only evaluated once, no matter how many compare-and-exchange
> operations are needed), and the operation may end up looking like
>
> *ptr = (convert to type of *ptr) ((promote) *ptr op (promote) val)
>
> rather than a simple decl = decl op val.  Or something more complicated if
> the operation involves complex numbers - look at what gets for mixed real
> / complex arithmetic, for example.  Given
>
> _Atomic _Complex float f;
> double d;
>
> f += d;
>
> the atomicity is for the whole complex number (and so the
> compare-and-exchange needs to work on the whole number) although only the
> real part is modified by the addition.
>

I've been poking at this today, and I'm wondering what you think of the 
idea of adding a flag to MODIFY_EXPR:
#define MODIFY_EXPR_IS_COMPOUND(NODE) \
  MODIFY_EXPR_CHECK (NODE)->base.asm_written_flag

and setting that in the MODIFY_EXPR node when we create it from the "x 
op= y" form in the front end.  That flag seems to be free for 
expressions.

It will then be trivial to locate these expressions and issue a 
builtin or the wrapper compare_exchange code during gimplification.  We 
just check whether MODIFY_EXPR_IS_COMPOUND() is true and TYPE_ATOMIC() is 
set on the expression type.  (I've already confirmed the atomic type is 
set, as the attribute ripples up to the MODIFY_EXPR node's type.)  Then 
we know all the important bits from the MODIFY_EXPR to perform the 
operation.

Otherwise, it looks like it can get a bit hairy...

What do you think?  As a side effect, we also only get it for the actual 
statements we care about.

Andrew
Joseph Myers July 30, 2013, 11:41 a.m. UTC | #16
On Mon, 29 Jul 2013, Andrew MacLeod wrote:

> Ive been poking at this today, and Im wondering what you think of the idea of
> adding a flag to MODIFY_EXPR,
> #define MODIFY_EXPR_IS_COMPOUND(NODE)
> MODIFY_EXPR_CHECK(NODE)->base.asm_written_flag
> 
> and set that in the MODIFY_EXPR node when we create it from the "x op= y" form
> in the front end.   That flag seems to be free for expressions.

My suggestion is that the IR generated by the front end should make it 
completely explicit what may need retrying with a compare-and-exchange, 
rather than relying on non-obvious details to reconstruct the semantics 
required at gimplification time - there are too many transformations 
(folding etc.) that may happen on existing trees and no clear way to be 
confident that you can still identify all the operands accurately after 
such transformations.  That is, an ATOMIC_COMPOUND_MODIFY_EXPR or similar, 
whose operands are: the LHS of the assignment; a temporary variable, "old" 
in C11 footnote 113; the RHS; and the "old op val" expression complete 
with the conversion to the type of the LHS.  Gimplification would then 
(carry out the effects of stabilize_reference on the LHS and save_expr on 
the RHS and) do "old = LHS;" followed by the do-while compare-exchange 
loop.

A flag on the expression could indicate that the floating-point semantics 
are required.  I'd guess back ends would need to provide three insn 
patterns, corresponding to feholdexcept, feclearexcept and feupdateenv, 
that there'd be corresponding built-in functions for these used at 
gimplification time, and that a target hook would give the type used for 
fenv_t by these built-in functions (*not* necessarily the same as the 
fenv_t used by any implementation of the functions in libm).  The target 
should also be able to declare that there's no support for floating-point 
exceptions (e.g. for soft-float) and so floating-point cases don't need 
any special handling.
Andrew MacLeod July 30, 2013, 12:26 p.m. UTC | #17
On 07/30/2013 07:41 AM, Joseph S. Myers wrote:
> On Mon, 29 Jul 2013, Andrew MacLeod wrote:
>
>> Ive been poking at this today, and Im wondering what you think of the idea of
>> adding a flag to MODIFY_EXPR,
>> #define MODIFY_EXPR_IS_COMPOUND(NODE)
>> MODIFY_EXPR_CHECK(NODE)->base.asm_written_flag
>>
>> and set that in the MODIFY_EXPR node when we create it from the "x op= y" form
>> in the front end.   That flag seems to be free for expressions.
> My suggestion is that the IR generated by the front end should make it
> completely explicit what may need retrying with a compare-and-exchange,
> rather than relying on non-obvious details to reconstruct the semantics
> required at gimplification time - there are too many transformations
> (folding etc.) that may happen on existing trees and no clear way to be
> confident that you can still identify all the operands accurately after
> such transformations.  That is, an ATOMIC_COMPOUND_MODIFY_EXPR or similar,
> whose operands are: the LHS of the assignment; a temporary variable, "old"
> in C11 footnote 113; the RHS; and the "old op val" expression complete
> with the conversion to the type of the LHS.  Gimplification would then
> (carry out the effects of stabilize_reference on the LHS and save_expr on
> the RHS and) do "old = LHS;" followed by the do-while compare-exchange
> loop.
In fact, after thinking about it overnight, I came to similar 
conclusions...  I believe it requires new builtin(s) for these 
operations.   Something like

    __atomic_compound_assign (&atomic_expr, enum atomic_operation_type, 
blah, blah,...)

A call to this builtin would be generated right from the parser when it 
sees the op= expression, and the built-in can then travel throughout 
gimple as a normal atomic built-in operation like the rest.  During 
expansion to RTL it can be turned into whatever sequence we happen to 
need.   This is what happens currently with the various 
__atomic_fetch_op and __atomic_op_fetch.  In fact, they are a subset of 
required operations, so I should be able to combine the implementation 
of those with this new one.

  Is C++ planning to match  these behaviours in the atomic library? It 
would need to access this builtin as well so that the C++ template code 
can invoke it.


> A flag on the expression could indicate that the floating-point semantics
> are required.  I'd guess back ends would need to provide three insn
> patterns, corresponding to feholdexcept, feclearexcept and feupdateenv,
> that there'd be corresponding built-in functions for these used at
> gimplification time, and that a target hook would give the type used for
> fenv_t by these built-in functions (*not* necessarily the same as the
> fenv_t used by any implementation of the functions in libm).  The target
> should also be able to declare that there's no support for floating-point
> exceptions (e.g. for soft-float) and so floating-point cases don't need
> any special handling.
>
I think the fact that it requires floating point semantics should be 
determinable from the types of the expressions involved.  If there is a 
floating point type somewhere, then we'll need to utilize the patterns.  
We'll still have the types, although it would certainly be easy enough 
to add a flag to the builtin... and maybe that's the way to go after all.

This also means that for the 3 floating point operations all we need are 
RTL insn patterns, no builtin.  And as with the other atomics, if the 
pattern doesn't exist, we just won't emit it.  We could add a warning 
easily enough in this case.

I think we're somewhere good now :-)

I guess I'll do the same thing for normal references to an atomic 
variable... issue the atomic load or atomic store directly from the 
parser...

Andrew
Joseph Myers July 30, 2013, 1:16 p.m. UTC | #18
On Tue, 30 Jul 2013, Andrew MacLeod wrote:

> > A flag on the expression could indicate that the floating-point semantics
> > are required.  I'd guess back ends would need to provide three insn
> > patterns, corresponding to feholdexcept, feclearexcept and feupdateenv,
> > that there'd be corresponding built-in functions for these used at
> > gimplification time, and that a target hook would give the type used for
> > fenv_t by these built-in functions (*not* necessarily the same as the
> > fenv_t used by any implementation of the functions in libm).  The target
> > should also be able to declare that there's no support for floating-point
> > exceptions (e.g. for soft-float) and so floating-point cases don't need
> > any special handling.
> > 
> I think the fact that it requires floating point sematics should be
> determinable from the types of the expressions involved.  If there is a

My reasoning for such a flag is:

The gimplifier shouldn't need to know the details of the semantics for 
operand promotions, how various arithmetic on complex numbers is carried 
out, and how a result is converted back to the destination type for the 
store.  Even if it's the language-specific gimplification code rather than 
the language-independent gimplifier, we still want those details to be in 
one place (currently build_binary_op for the binary operation semantics), 
shared by atomic and non-atomic operations.

Hence my suggestion that the operands to the new built-in function / 
operation do not include the unmodified RHS, and do not directly specify 
the operation in question.  Instead, one operand is an expression of the 
form

  (convert-to-LHS-type) (old op val)

where "old" and "val" are the temporary variables given in C11 footnote 
113 and probably allocated by the front end ("val" initialized once, "old" 
initialized once but then potentially changing each time through the 
loop).  The trees for that expression would be generated by 
build_binary_op and contain everything required for the semantics of e.g. 
complex arithmetic.

It's true that in the case where floating-point issues arise, 
floating-point types will be involved somewhere in this expression (and 
otherwise, they will not be - though the initializer for "val" might 
itself have involved floating-point arithmetic), but you'd need to search 
recursively for them; having a flag seems simpler.

> THis also means that for the 3 floating point operations all we need are RTL
> insn patterns, no buitin.  And as with the other atomics, if the pattern

I think something will also be needed to specify allocation of the fenv_t 
temporary (whether in memory or registers).

> doesnt exist, we just wont emit it.  we could add a warning easily enough in
> this case.

Note there's a difference between no need to emit it, no warning should be 
given (soft float) and need to emit it but patterns not yet written so 
warning should be given (hard float but patterns not yet implemented for 
the architecture).
Andrew MacLeod July 30, 2013, 1:44 p.m. UTC | #19
On 07/30/2013 09:16 AM, Joseph S. Myers wrote:
> My reasoning for such a flag is:
>
>
>
> Hence my suggestion that the operands to the new built-in function /
> operation do not include the unmodified RHS, and do not directly specify
> the operation in question.  Instead, one operand is an expression of the
> form
>
>    (convert-to-LHS-type) (old op val)
>
> where "old" and "val" are the temporary variables given in C11 footnote
> 113 and probably allocated by the front end ("val" initialized once, "old"
> initialized once but then potentially changing each time through the
> loop).  The trees for that expression would be generated by
> build_binary_op and contain everything required for the semantics of e.g.
> complex arithmetic.
>
> It's true that in the case where floating-point issues arise,
> floating-point types will be involved somewhere in this expression (and
> otherwise, they will not be - though the initializer for "val" might
> itself have involved floating-point arithmetic), but you'd need to search
> recursively for them; having a flag seems simpler.
>
I agree.

>> THis also means that for the 3 floating point operations all we need are RTL
>> insn patterns, no buitin.  And as with the other atomics, if the pattern
> I think something will also be needed to specify allocation of the fenv_t
> temporary (whether in memory or registers).
>
>> doesnt exist, we just wont emit it.  we could add a warning easily enough in
>> this case.
> Note there's a difference between no need to emit it, no warning should be
> given (soft float) and need to emit it but patterns not yet written so
> warning should be given (hard float but patterns not yet implemented for
> the architecture).

In fact, the flag could be the presence of the fenv_t variable: a null 
value for that variable field in the builtin would indicate you don't 
need the patterns emitted.

I'll get back to you in a bit with the actual built-in's format once I 
poke around the existing one and see how I can leverage it.  I rewrote 
all that code last year and it ought to be pretty simple to add new 
operand support.  It had better be, anyway :-)

Andrew
Andrew MacLeod July 30, 2013, 4:49 p.m. UTC | #20
On 07/26/2013 01:15 PM, Andrew MacLeod wrote:
> This patch adds an atomic type qualifier to GCC.   It can be accessed 
> via __attribute__((atomic)) or in C11 mode via the _Atomic keyword.
>
> What it does:
>  *  All the __atomic builtins now expect an atomic qualified value for 
> the atomic variable.  Non-atomic values can still be passed in, but 
> they are not guaranteed to work if the original type is not compatible 
> with the atomic type  No fears.  At the moment, every target's atomic 
> types line up with the unsigned type of the same size, so like magic, 
> its all good for existing code.
>  *  There is a new target hook "atomic_align_for_mode" which a target 
> can use to override the default alignment when the atomic variation 
> requires something different.   There may be other attributes 
> eventually, but for now alignment is the only thing supported.  I 
> considered size, but the effort to do that is what drove me to the 
> re-architecture project :-P  At least the mechanism is now in place 
> for overrides when atomic requirements aren't just the default it use 
> to be.    (I introduced atomicQI_type_node, atomicHI_type_node,  
> atomicSI_type_node, atomicDI_type_node, atomicTI_type_node, ).  I 
> tested this by aligning all shorts to 32 byte boundaries and 
> bootstrapping/running all the tests.
>  * I went through the front ends trying to treat  atomic qualifier 
> mostly like a volatile, since thats is the basic behaviour one 
> expects.  In the backend, it sets the TREE_IS_VOLATILE bit so 
> behaviour ought to be the same as before, plus they are/will be always 
> used in __atomic_built-in functions.
>  * I changed the libstdc++ atomic implementation so that all the 
> atomic classes use __attribute__((atomic)) on their data members, 
> ensuring that if anyone overrides the atomic type, it should work fine 
> with C++ atomics.   It also served as a good test that I had the type 
> set up properly... getting TYPE_CANONICAL() correct throughout the C++ 
> compiler was, well..., lets just say painful.
> * I changed 2 of the atomic test sets, one uses 
> __attribute__((atomic)) to ensure the attribute compiles ok, and the 
> other uses _Atomic and --std=c11 to ensure that compiles OK.
>
> What it doesn't do:
>   * It doesn't implement the C11 expression expansion into atomic 
> built-ins.  ie, you can't write:
> _Atomic int x;
>  x = 0;
>       and have the result be an atomic operation calling 
> __atomic_store (&x, 0).   That will be in a  follow on patch. So none 
> of the expression expansion from C11 is included yet. This just 
> enables that.
>  * It doesn't do a full set of checks when _Atomic is used in invalid 
> ways.  I don't know the standard nor the front end well enough.. Im 
> hoping someone else will pitch in with that eventually, assuming 
> someone cares enough :-)  There are a couple of errors issued, but 
> that is it.
>
> This bootstraps on x86_64, and passes all the testsuites with no new 
> regressions.  I didnt split it up any more than this because most of 
> it is inter-related... plus this is already split out from a much 
> larger set of changes :-) I did try to at least organize the ordering, 
> all the long boring stuff is at the end :-)
>
> Both front end and back end stuff in here.  I think I caught it all, 
> but have a look. There is time to find any remaining issues I may have 
> missed.
>
> Andrew
>
I split the original patch into some smaller hunks, and cleaned up a few 
bits and pieces here and there... following:
Andrew MacLeod July 31, 2013, 12:04 p.m. UTC | #21
On 07/30/2013 09:44 AM, Andrew MacLeod wrote:
> On 07/30/2013 09:16 AM, Joseph S. Myers wrot
>
>
>>> THis also means that for the 3 floating point operations all we need 
>>> are RTL
>>> insn patterns, no buitin.  And as with the other atomics, if the 
>>> pattern
>> I think something will also be needed to specify allocation of the 
>> fenv_t
>> temporary (whether in memory or registers).
>>
>>> doesnt exist, we just wont emit it.  we could add a warning easily 
>>> enough in
>>> this case.
>> Note there's a difference between no need to emit it, no warning 
>> should be
>> given (soft float) and need to emit it but patterns not yet written so
>> warning should be given (hard float but patterns not yet implemented for
>> the architecture).
>
> In fact, the flag could be the presence of the fenv_t variable.. Null 
> for that variable field in the builtin indicate you don't need the 
> patterns emitted.
>
> I';ll get back to you in a bit with the actual built-in's format once 
> I poke around the existing one and see how I can leverage it. I 
> rewrote all that code last year and it ought to be pretty simple to 
> add new operand support.  It better be anyway :-)
>

I worked out the built-ins and what they need to do... and you know 
what?  I'm not sure I see the point any more.

I am going to give a shot at simply expanding this code right in the 
front end.  For the floating point and complex types I'll create temps 
and pass them to the generic atomic routines for load and 
compare_exchange... I should be able to directly call the same routines 
that sort out what can be mapped to lock-free and what can't.  And in 
the end, the optimizers can sort out how to make things better.  That 
way we don't need any support anywhere else.  (Well, we may need 3 
builtins for the floating point stuff... I don't know; I'll worry about 
that later.)

On a side note,  after Friday, I'm off for 2 weeks, so I'll be pretty 
quiet until  the 19th or 20th.

Btw, if anyone wants to take a stab at a stdatomic.h file, that would be 
OK with me :-)
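
For anyone who does, a very small sketch of the style such a header could 
take on top of the existing builtins (a few representative macros only, 
not the eventual GCC header):

  /* stdatomic.h sketch (illustration only).  */
  #define memory_order_relaxed   __ATOMIC_RELAXED
  #define memory_order_seq_cst   __ATOMIC_SEQ_CST

  #define atomic_load_explicit(PTR, MO)        __atomic_load_n ((PTR), (MO))
  #define atomic_load(PTR)                     atomic_load_explicit ((PTR), __ATOMIC_SEQ_CST)
  #define atomic_store_explicit(PTR, VAL, MO)  __atomic_store_n ((PTR), (VAL), (MO))
  #define atomic_store(PTR, VAL)               atomic_store_explicit ((PTR), (VAL), __ATOMIC_SEQ_CST)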

Andrew
Andrew MacLeod Aug. 1, 2013, 9:54 p.m. UTC | #22
On 07/26/2013 07:21 PM, Joseph S. Myers wrote:
> On Fri, 26 Jul 2013, Andrew MacLeod wrote:
>
>> This patch adds an atomic type qualifier to GCC.   It can be accessed via
>> __attribute__((atomic)) or in C11 mode via the _Atomic keyword.
> Why the attribute - why not just the keyword?
>
>
> * When C11 refers to qualified types, by default it does not include
> atomic types (6.2.5#27).  What's your rationale for including "atomic" in
> TYPE_QUALS rather than making it separate?  With either approach, a review
> of every reference to qualifiers in the front end is needed to determine
> what's correct for "atomic"; did you find that including "atomic" in
> qualifiers in the implementation made for fewer changes?
>
I've gotten the expressions mostly working (= and op= work great), but 
I am having some issues with plain loads and identifying the correct 
time to emit the load...  I've also hacked up a rough approximation of 
what stdatomic.h would need.

I also think I'll revisit how atomic values are marked in the front 
end... now that I'm doing all the real work in the front end, I don't 
think I want that attribute all over the place like I did before.  I 
may well decide to remove it from the qualifiers and make it a separate 
type now...  So don't worry about reviewing those files...

I likely won't get to this until I get back from vacation in a couple of 
weeks, however.  I will post the state of things before I go, both for 
my own recollection when I return, and in case anyone just happens to 
feel really interested and wants to have a look at some of the remaining 
required changes.  Ha, I wish.  It's starting to get close, though.

Andrew

Patch


	gcc
	* tree.h (struct tree_base): Add atomic_flag field.
	(TYPE_ATOMIC): New accessor macro.
	(enum cv_qualifier): Add TYPE_QUAL_ATOMIC.
	(TYPE_QUALS, TYPE_QUALS_NO_ADDR_SPACE): Add TYPE_QUAL_ATOMIC.
	(TYPE_QUALS_NO_ADDR_SPACE_NO_ATOMIC): New macro.
	(enum tree_index): Add TI_ATOMIC{QHSDT}I_TYPE.
	(atomic{QHSDT}I_type_node): Add new type nodes.
	* emit-rtl.c (set_mem_attributes_minus_bitpos): Atomics are volatile.
	* function.c (assign_stack_temp_for_type): Atomics are volatile.
	* hooks.c (hook_uint_mode_0): Return 0 unit hook.
	* hooks.h (hook_uint_mode_0): Prototype.
	* target.def (atomic_align_for_mode): define hook.
	* print-tree.c (print_node): Print atomic qualifier.
	* tree-pretty-print.c (dump_generic_node): Print atomic type attribute.
	* tree.c (set_type_quals): Set TYPE_ATOMIC.
	(find_atomic_base_type): New.  Function to get atomic base from size.
	(build_qualified_type): Tweak for atomic qualifier overrides.
	(build_atomic_variant): New.  Build atomic variant node.
	(build_common_tree_nodes): Build atomic{QHSDT}I_type_node, allowing
	for override with target hook.
	* alias.c (objects_must_conflict_p): Treat atomics as volatile.
	* calls.c (expand_call): Treat atomics as volatile.

	c-family
	* c-common.h (enum rid): Add RID_ATOMIC.
	* c-common.c (struct c_common_resword c_common_r): Add "_Atomic".
	(struct attribute_spec c_common_att): Add "atomic" attribute.
	(handle_atomic_attribute): New.  Add atomic qualifier to type.
	(sync_resolve_params): Use MAIN_VARIANT as cast for the non-atomic
	parameters.
	(keyword_is_type_qualifier): Add RID_ATOMIC;
	* c-format.c (check_format_types): Add atomic as a qualifier check.
	* c-pretty-print.c (pp_c_cv_qualifiers): Handle atomic qualifier.

	c
	* c-tree.h (struct c_declspecs): Add atomic_p field.
	* c-aux-info.c (gen_type): Handle atomic qualifier.
	* c-decl.c (shadow_tag_warned): Add atomic_p to declspecs check.
	(quals_from_declspecs): Add atomic_p to declspecs check.
	(grokdeclarator): Check atomic and warn of duplicate or errors.
	(build_null_declspecs): Handle atomic_p.
	(declspecs_add_qual): Handle RID_ATOMIC.
	* c-parser.c (c_token_starts_typename): Add RID_ATOMIC.
	(c_token_is_qualifier, c_token_starts_declspecs): Add RID_ATOMIC.
	(c_parser_declspecs, c_parser_attribute_any_word): Add RID_ATOMIC.
	* c-typeck.c (build_indirect_ref): Treat atomic as volatile.
	(build_array_ref, convert_for_assignment): Treat atomic as volatile.

	objc
	* objc-act.c (objc_push_parm): Treat atomic as volatile.

	cp
	* cp-tree.h (CP_TYPE_ATOMIC_P): New macro.
	(enum cp_decl_spec): Add ds_atomic.
	* class.c (build_simple_base_path): Treat atomic as volatile.
	* cvt.c (diagnose_ref_binding): Handle atomic.
	(convert_from_reference, convert_to_void): Treat atomic as volatile.
	* decl.c (grokfndecl): Treat atomic as volatile.
	(build_ptrmemfunc_type): Set TYPE_ATOMIC.
	(grokdeclarator): handle atomic qualifier.
	* mangle.c (dump_substitution_candidates): Add atomic to the qualifier
	list.
	* parser.c (cp_parser_type_specifier): Handle RID_ATOMIC.
	(cp_parser_cv_qualifier_seq_opt): Handle RID_ATOMIC.
	(set_and_check_decl_spec_loc): Add atomic to decl_spec_names[].
	* pt.c (check_cv_quals_for_unify): Add TYPE_QUAL_ATOMIC to check.
	* rtti.c (qualifier_flags): Set atomic qualifier flag.
	* semantics.c (non_const_var_error): Check CP_TYPE_ATOMIC_P too.
	* tree.c (cp_build_qualified_type_real): Add TYPE_QUAL_ATOMIC.
	(cv_unqualified): Add TYPE_QUAL_ATOMIC to mask.
	* typeck.c (build_class_member_access_expr): Treat atomic as volatile.
	(cp_build_indirect_ref, cp_build_array_ref): Treat atomic as volatile.
	(check_return_expr, cp_type_quals): Treat atomic as volatile.
	(cv_qualified_p): Add TYPE_QUAL_ATOMIC to mask.

	libstdc++-v3
	* include/bits/atomic_base.h (struct __atomic_base): Add
	__attribute__((atomic)) to member data element.
	(struct __atomic_base<_PTp*>): Add __attribute__((atomic)) to member
	data element.
	* include/std/atomic (struct atomic): Add __attribute__((atomic)) to
	member data element.

	fortran
	* types.def (BT_ATOMIC_PTR, BT_CONST_ATOMIC_PTR): New
	primitive data types for volatile atomic pointers.
	(BT_FN_VOID_APTR): Renamed from BT_FN_VOID_VPTR.
	(BT_FN_VOID_VPTR_INT, BT_FN_BOOL_VPTR_INT,
	BT_FN_BOOL_SIZE_CONST_VPTR): Renamed to APTR variant.
	(BT_FN_I{1,2,4,8,16}_CONST_APTR_INT): New.
	(BT_FN_I{1,2,4,8,16}_APTR_I{1,2,4,8,16}_INT): New.
	(BT_FN_VOID_APTR_I{1,2,4,8,16}_INT): New.
	(BT_FN_VOID_SIZE_VPTR_PTR_INT, BT_FN_VOID_SIZE_CONST_VPTR_PTR_INT,
	BT_FN_VOID_SIZE_VPTR_PTR_PTR_INT, 
	BT_FN_BOOL_VPTR_PTR_I{1,2,4,8,16}_BOOL_INT_INT): Renamed to APTR
	variant.

	gcc
	* builtin-types.def (BT_ATOMIC_PTR, BT_CONST_ATOMIC_PTR): New
	primitive data types for volatile atomic pointers.
	(BT_FN_VOID_APTR): Renamed from BT_FN_VOID_VPTR.
	(BT_FN_VOID_VPTR_INT, BT_FN_BOOL_VPTR_INT,
	BT_FN_BOOL_SIZE_CONST_VPTR): Renamed to APTR variant.
	(BT_FN_I{1,2,4,8,16}_CONST_APTR_INT): New.
	(BT_FN_I{1,2,4,8,16}_APTR_I{1,2,4,8,16}_INT): New.
	(BT_FN_VOID_APTR_I{1,2,4,8,16}_INT): New.
	(BT_FN_VOID_SIZE_VPTR_PTR_INT, BT_FN_VOID_SIZE_CONST_VPTR_PTR_INT,
	BT_FN_VOID_SIZE_VPTR_PTR_PTR_INT, 
	BT_FN_BOOL_VPTR_PTR_I{1,2,4,8,16}_BOOL_INT_INT): Renamed to APTR
	variant.
	* sync-builtins.def: Change all __atomic builtins to use the APTR
	atomic pointer variant for the first parameter instead of VPTR.

	doc
	* generic.texi (CP_TYPE_ATOMIC_P): Document.
	* tm.texi (TARGET_ATOMIC_TYPE_FOR_MODE): Define.
	* doc/tm.texi.in (TARGET_ATOMIC_TYPE_FOR_MODE): Add.

	testsuite
	* gcc.dg/atomic-exchange-{1-5}.c: Change atomic var to use
	__attribute__((atomic)).
	* gcc.dg/atomic-op-{1-5}.c: Add --std=c11 and change atomic var to
	use _Atomic keyword.






Index: gcc/tree.h
===================================================================
*** gcc/tree.h	(revision 201248)
--- gcc/tree.h	(working copy)
*************** struct GTY(()) tree_base {
*** 457,463 ****
        unsigned packed_flag : 1;
        unsigned user_align : 1;
        unsigned nameless_flag : 1;
!       unsigned spare0 : 4;
  
        unsigned spare1 : 8;
  
--- 457,464 ----
        unsigned packed_flag : 1;
        unsigned user_align : 1;
        unsigned nameless_flag : 1;
!       unsigned atomic_flag : 1;
!       unsigned spare0 : 3;
  
        unsigned spare1 : 8;
  
*************** extern enum machine_mode vector_type_mod
*** 2205,2210 ****
--- 2206,2214 ----
  /* Nonzero in a type considered volatile as a whole.  */
  #define TYPE_VOLATILE(NODE) (TYPE_CHECK (NODE)->base.volatile_flag)
  
+ /* Nonzero in a type considered atomic as a whole.  */
+ #define TYPE_ATOMIC(NODE) (TYPE_CHECK (NODE)->base.u.bits.atomic_flag)
+ 
  /* Means this type is const-qualified.  */
  #define TYPE_READONLY(NODE) (TYPE_CHECK (NODE)->base.readonly_flag)
  
*************** enum cv_qualifier
*** 2226,2232 ****
      TYPE_UNQUALIFIED   = 0x0,
      TYPE_QUAL_CONST    = 0x1,
      TYPE_QUAL_VOLATILE = 0x2,
!     TYPE_QUAL_RESTRICT = 0x4
    };
  
  /* Encode/decode the named memory support as part of the qualifier.  If more
--- 2230,2237 ----
      TYPE_UNQUALIFIED   = 0x0,
      TYPE_QUAL_CONST    = 0x1,
      TYPE_QUAL_VOLATILE = 0x2,
!     TYPE_QUAL_RESTRICT = 0x4,
!     TYPE_QUAL_ATOMIC   = 0x8
    };
  
  /* Encode/decode the named memory support as part of the qualifier.  If more
*************** enum cv_qualifier
*** 2245,2250 ****
--- 2250,2256 ----
  #define TYPE_QUALS(NODE)					\
    ((int) ((TYPE_READONLY (NODE) * TYPE_QUAL_CONST)		\
  	  | (TYPE_VOLATILE (NODE) * TYPE_QUAL_VOLATILE)		\
+ 	  | (TYPE_ATOMIC (NODE) * TYPE_QUAL_ATOMIC)		\
  	  | (TYPE_RESTRICT (NODE) * TYPE_QUAL_RESTRICT)		\
  	  | (ENCODE_QUAL_ADDR_SPACE (TYPE_ADDR_SPACE (NODE)))))
  
*************** enum cv_qualifier
*** 2252,2259 ****
--- 2258,2273 ----
  #define TYPE_QUALS_NO_ADDR_SPACE(NODE)				\
    ((int) ((TYPE_READONLY (NODE) * TYPE_QUAL_CONST)		\
  	  | (TYPE_VOLATILE (NODE) * TYPE_QUAL_VOLATILE)		\
+ 	  | (TYPE_ATOMIC (NODE) * TYPE_QUAL_ATOMIC)		\
+ 	  | (TYPE_RESTRICT (NODE) * TYPE_QUAL_RESTRICT)))
+ /* The same as TYPE_QUALS without the address space and atomic 
+    qualifications.  */
+ #define TYPE_QUALS_NO_ADDR_SPACE_NO_ATOMIC(NODE)		\
+   ((int) ((TYPE_READONLY (NODE) * TYPE_QUAL_CONST)		\
+ 	  | (TYPE_VOLATILE (NODE) * TYPE_QUAL_VOLATILE)		\
  	  | (TYPE_RESTRICT (NODE) * TYPE_QUAL_RESTRICT)))
  
+ 
  /* These flags are available for each language front end to use internally.  */
  #define TYPE_LANG_FLAG_0(NODE) (TYPE_CHECK (NODE)->type_common.lang_flag_0)
  #define TYPE_LANG_FLAG_1(NODE) (TYPE_CHECK (NODE)->type_common.lang_flag_1)
*************** enum tree_index
*** 4178,4183 ****
--- 4192,4203 ----
    TI_UINTDI_TYPE,
    TI_UINTTI_TYPE,
  
+   TI_ATOMICQI_TYPE,
+   TI_ATOMICHI_TYPE,
+   TI_ATOMICSI_TYPE,
+   TI_ATOMICDI_TYPE,
+   TI_ATOMICTI_TYPE,
+ 
    TI_UINT16_TYPE,
    TI_UINT32_TYPE,
    TI_UINT64_TYPE,
*************** extern GTY(()) tree global_trees[TI_MAX]
*** 4334,4339 ****
--- 4354,4365 ----
  #define unsigned_intDI_type_node	global_trees[TI_UINTDI_TYPE]
  #define unsigned_intTI_type_node	global_trees[TI_UINTTI_TYPE]
  
+ #define atomicQI_type_node	global_trees[TI_ATOMICQI_TYPE]
+ #define atomicHI_type_node	global_trees[TI_ATOMICHI_TYPE]
+ #define atomicSI_type_node	global_trees[TI_ATOMICSI_TYPE]
+ #define atomicDI_type_node	global_trees[TI_ATOMICDI_TYPE]
+ #define atomicTI_type_node	global_trees[TI_ATOMICTI_TYPE]
+ 
  #define uint16_type_node		global_trees[TI_UINT16_TYPE]
  #define uint32_type_node		global_trees[TI_UINT32_TYPE]
  #define uint64_type_node		global_trees[TI_UINT64_TYPE]
*************** extern tree build_aligned_type (tree, un
*** 5101,5106 ****
--- 5127,5135 ----
  extern tree build_distinct_type_copy (tree);
  extern tree build_variant_type_copy (tree);
  
+ /* Find an atomic base type.  */
+ extern tree find_atomic_base_type (tree, tree *);
+ 
  /* Finish up a builtin RECORD_TYPE. Give it a name and provide its
     fields. Optionally specify an alignment, and then lay it out.  */
  
Index: gcc/emit-rtl.c
===================================================================
*** gcc/emit-rtl.c	(revision 201248)
--- gcc/emit-rtl.c	(working copy)
*************** set_mem_attributes_minus_bitpos (rtx ref
*** 1607,1613 ****
       front-end routine) and use it.  */
    attrs.alias = get_alias_set (t);
  
!   MEM_VOLATILE_P (ref) |= TYPE_VOLATILE (type);
    MEM_POINTER (ref) = POINTER_TYPE_P (type);
  
    /* Default values from pre-existing memory attributes if present.  */
--- 1607,1613 ----
       front-end routine) and use it.  */
    attrs.alias = get_alias_set (t);
  
!   MEM_VOLATILE_P (ref) |= (TYPE_VOLATILE (type) || TYPE_ATOMIC (type));
    MEM_POINTER (ref) = POINTER_TYPE_P (type);
  
    /* Default values from pre-existing memory attributes if present.  */
Index: gcc/function.c
===================================================================
*** gcc/function.c	(revision 201248)
--- gcc/function.c	(working copy)
*************** assign_stack_temp_for_type (enum machine
*** 901,907 ****
  
    /* If a type is specified, set the relevant flags.  */
    if (type != 0)
!     MEM_VOLATILE_P (slot) = TYPE_VOLATILE (type);
    MEM_NOTRAP_P (slot) = 1;
  
    return slot;
--- 901,907 ----
  
    /* If a type is specified, set the relevant flags.  */
    if (type != 0)
!     MEM_VOLATILE_P (slot) = (TYPE_VOLATILE (type) || TYPE_ATOMIC (type));
    MEM_NOTRAP_P (slot) = 1;
  
    return slot;
Index: gcc/hooks.c
===================================================================
*** gcc/hooks.c	(revision 201248)
--- gcc/hooks.c	(working copy)
*************** hook_rtx_tree_int_null (tree a ATTRIBUTE
*** 352,357 ****
--- 352,364 ----
    return NULL;
  }
  
+ /* Generic hook that takes a machine mode and returns an unsigned int 0.  */
+ unsigned int
+ hook_uint_mode_0 (enum machine_mode m ATTRIBUTE_UNUSED)
+ {
+   return 0;
+ }
+ 
  /* Generic hook that takes three trees and returns the last one as is.  */
  tree
  hook_tree_tree_tree_tree_3rd_identity (tree a ATTRIBUTE_UNUSED,
Index: gcc/hooks.h
===================================================================
*** gcc/hooks.h	(revision 201248)
--- gcc/hooks.h	(working copy)
*************** extern tree hook_tree_tree_tree_tree_3rd
*** 89,94 ****
--- 89,95 ----
  extern tree hook_tree_tree_int_treep_bool_null (tree, int, tree *, bool);
  
  extern unsigned hook_uint_void_0 (void);
+ extern unsigned int hook_uint_mode_0 (enum machine_mode);
  
  extern bool default_can_output_mi_thunk_no_vcall (const_tree, HOST_WIDE_INT,
  						  HOST_WIDE_INT, const_tree);
Index: gcc/target.def
===================================================================
*** gcc/target.def	(revision 201248)
--- gcc/target.def	(working copy)
*************** DEFHOOKPOD
*** 5116,5122 ****
   @code{atomic_test_and_set} is not exactly 1, i.e. the\
   @code{bool} @code{true}.",
   unsigned char, 1)
!  
  /* Leave the boolean fields at the end.  */
  
  /* True if we can create zeroed data by switching to a BSS section
--- 5116,5133 ----
   @code{atomic_test_and_set} is not exactly 1, i.e. the\
   @code{bool} @code{true}.",
   unsigned char, 1)
! 
! /* Return an alignment override in bits for an atomic object of machine mode
!    MODE, or 0 if the default alignment for that mode should be used.  */
! DEFHOOK
! (atomic_align_for_mode,
! "If defined, this function returns an appropriate alignment in bits for an\
!  atomic object of machine_mode @var{mode}.  If 0 is returned then the\
!  default alignment for the specified mode is used. ",
!  unsigned int, (enum machine_mode mode),
!  hook_uint_mode_0)
! 
! 
  /* Leave the boolean fields at the end.  */
  
  /* True if we can create zeroed data by switching to a BSS section
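
To illustrate the new hook (this example is not part of the patch): a port
that wants a larger alignment for some atomic modes would supply a function
like the sketch below and point the TARGET_ATOMIC_ALIGN_FOR_MODE macro that
the hook machinery generates at it in its target file.  The function name
here is made up for the example; returning 0 keeps the default alignment.

/* Hypothetical target override: force 128-bit alignment for TImode
   atomics, fall back to the default alignment for everything else.  */
static unsigned int
example_atomic_align_for_mode (enum machine_mode mode)
{
  if (mode == TImode)
    return 128;		/* Alignment in bits.  */
  return 0;		/* 0 means: use the default alignment for MODE.  */
}

#undef TARGET_ATOMIC_ALIGN_FOR_MODE
#define TARGET_ATOMIC_ALIGN_FOR_MODE example_atomic_align_for_mode
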
Index: gcc/print-tree.c
===================================================================
*** gcc/print-tree.c	(revision 201248)
--- gcc/print-tree.c	(working copy)
*************** print_node (FILE *file, const char *pref
*** 304,309 ****
--- 304,311 ----
  
    if (TYPE_P (node) ? TYPE_READONLY (node) : TREE_READONLY (node))
      fputs (" readonly", file);
+   if (TYPE_P (node) && TYPE_ATOMIC (node))
+     fputs (" atomic", file);
    if (!TYPE_P (node) && TREE_CONSTANT (node))
      fputs (" constant", file);
    else if (TYPE_P (node) && TYPE_SIZES_GIMPLIFIED (node))
Index: gcc/tree-pretty-print.c
===================================================================
*** gcc/tree-pretty-print.c	(revision 201248)
--- gcc/tree-pretty-print.c	(working copy)
*************** dump_generic_node (pretty_printer *buffe
*** 679,684 ****
--- 679,686 ----
  	unsigned int quals = TYPE_QUALS (node);
  	enum tree_code_class tclass;
  
+ 	if (quals & TYPE_QUAL_ATOMIC)
+ 	  pp_string (buffer, "atomic ");
  	if (quals & TYPE_QUAL_CONST)
  	  pp_string (buffer, "const ");
  	else if (quals & TYPE_QUAL_VOLATILE)
*************** dump_generic_node (pretty_printer *buffe
*** 980,985 ****
--- 982,989 ----
        {
  	unsigned int quals = TYPE_QUALS (node);
  
+ 	if (quals & TYPE_QUAL_ATOMIC)
+ 	  pp_string (buffer, "atomic ");
  	if (quals & TYPE_QUAL_CONST)
  	  pp_string (buffer, "const ");
  	if (quals & TYPE_QUAL_VOLATILE)
Index: gcc/tree.c
===================================================================
*** gcc/tree.c	(revision 201248)
--- gcc/tree.c	(working copy)
*************** set_type_quals (tree type, int type_qual
*** 5937,5942 ****
--- 5937,5943 ----
    TYPE_READONLY (type) = (type_quals & TYPE_QUAL_CONST) != 0;
    TYPE_VOLATILE (type) = (type_quals & TYPE_QUAL_VOLATILE) != 0;
    TYPE_RESTRICT (type) = (type_quals & TYPE_QUAL_RESTRICT) != 0;
+   TYPE_ATOMIC (type) = (type_quals & TYPE_QUAL_ATOMIC) != 0;
    TYPE_ADDR_SPACE (type) = DECODE_QUAL_ADDR_SPACE (type_quals);
  }
  
*************** check_aligned_type (const_tree cand, con
*** 5970,5975 ****
--- 5971,6027 ----
  				   TYPE_ATTRIBUTES (base)));
  }
  
+ /* Check whether TYPE matches the size of one of the built-in atomic types,
+    and return that atomic type.
+    The non-atomic base type is also returned if NONATOMIC_TYPE is non-NULL.  */
+ 
+ tree
+ find_atomic_base_type (tree type, tree *nonatomic_type)
+ {
+   tree base_type, base_atomic_type;
+ 
+   if (!TYPE_P (type) || type == void_type_node)
+     return NULL_TREE;
+ 
+   HOST_WIDE_INT type_size = tree_low_cst (TYPE_SIZE (type), 1);
+   switch (type_size)
+     {
+     case 8:
+       base_atomic_type = atomicQI_type_node;
+       base_type = unsigned_intQI_type_node;
+       break;
+ 
+     case 16:
+       base_atomic_type = atomicHI_type_node;
+       base_type = unsigned_intHI_type_node;
+       break;
+ 
+     case 32:
+       base_atomic_type = atomicSI_type_node;
+       base_type = unsigned_intSI_type_node;
+       break;
+ 
+     case 64:
+       base_atomic_type = atomicDI_type_node;
+       base_type = unsigned_intDI_type_node;
+       break;
+ 
+     case 128:
+       base_atomic_type = atomicTI_type_node;
+       base_type = unsigned_intTI_type_node;
+       break;
+ 
+     default:
+       base_atomic_type = NULL_TREE;
+       base_type = NULL_TREE;
+     }
+ 
+   if (nonatomic_type)
+     *nonatomic_type = base_type;
+ 
+   return base_atomic_type;
+ }
+ 
  /* Return a version of the TYPE, qualified as indicated by the
     TYPE_QUALS, if one exists.  If no qualified version exists yet,
     return NULL_TREE.  */
*************** build_qualified_type (tree type, int typ
*** 6009,6014 ****
--- 6061,6074 ----
        t = build_variant_type_copy (type);
        set_type_quals (t, type_quals);
  
+       if (((type_quals & TYPE_QUAL_ATOMIC) == TYPE_QUAL_ATOMIC)
+ 	  && INTEGRAL_TYPE_P (type))
+ 	{
+ 	  tree atomic_base_type = find_atomic_base_type (type, NULL);
+ 	  if (atomic_base_type)
+ 	    TYPE_ALIGN (t) = TYPE_ALIGN (atomic_base_type);
+ 	}
+ 
        if (TYPE_STRUCTURAL_EQUALITY_P (type))
  	/* Propagate structural equality. */
  	SET_TYPE_STRUCTURAL_EQUALITY (t);
*************** make_or_reuse_accum_type (unsigned size,
*** 9521,9526 ****
--- 9581,9611 ----
    return make_accum_type (size, unsignedp, satp);
  }
  
+ 
+ /* Create an atomic variant node for TYPE.  This routine is called during
+    initialization of data types to create the 5 basic atomic types.  The
+    generic build_qualified_type function requires these nodes to already be
+    set up, so it cannot be used to create them.
+    If ALIGN is non-zero, the alignment of the variant is overridden to it.  */
+ 
+ static tree
+ build_atomic_variant (tree type, unsigned int align)
+ {
+   tree t;
+ 
+   /* Make sure it's not already registered.  */
+   if ((t = get_qualified_type (type, TYPE_QUAL_ATOMIC)))
+     return t;
+   
+   t = build_variant_type_copy (type);
+   set_type_quals (t, TYPE_QUAL_ATOMIC);
+ 
+   if (align)
+     TYPE_ALIGN (t) = align;
+ 
+   return t;
+ }
+ 
  /* Create nodes for all integer types (and error_mark_node) using the sizes
     of C datatypes.  SIGNED_CHAR specifies whether char is signed,
     SHORT_DOUBLE specifies whether double should be of the same precision
*************** build_common_tree_nodes (bool signed_cha
*** 9603,9608 ****
--- 9688,9708 ----
    unsigned_intDI_type_node = make_or_reuse_type (GET_MODE_BITSIZE (DImode), 1);
    unsigned_intTI_type_node = make_or_reuse_type (GET_MODE_BITSIZE (TImode), 1);
  
+   /* Don't call build_qualified_type for atomics.  That routine does special
+      processing for atomics, and until these nodes are initialized it's
+      better not to make that call.
+ 
+      Check whether there is a target override for atomic types.  */
+ 
+ #define SET_ATOMIC_TYPE_NODE(TYPE, MODE, DEFAULT) 		\
+  (TYPE) = build_atomic_variant (DEFAULT, targetm.atomic_align_for_mode (MODE));
+ 
+   SET_ATOMIC_TYPE_NODE (atomicQI_type_node, QImode, unsigned_intQI_type_node);
+   SET_ATOMIC_TYPE_NODE (atomicHI_type_node, HImode, unsigned_intHI_type_node);
+   SET_ATOMIC_TYPE_NODE (atomicSI_type_node, SImode, unsigned_intSI_type_node);
+   SET_ATOMIC_TYPE_NODE (atomicDI_type_node, DImode, unsigned_intDI_type_node);
+   SET_ATOMIC_TYPE_NODE (atomicTI_type_node, TImode, unsigned_intTI_type_node);
+ 
    access_public_node = get_identifier ("public");
    access_protected_node = get_identifier ("protected");
    access_private_node = get_identifier ("private");
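
Within this file, find_atomic_base_type is used above only for the alignment
fixup in build_qualified_type; a caller that also wants the matching
non-atomic type would use the out parameter along these lines (illustrative
only, the function name here is mine):

/* Map TYPE to the unsigned type of the same width as the matching basic
   atomic type, or return NULL_TREE if no basic atomic type matches.  */
static tree
example_underlying_nonatomic_type (tree type)
{
  tree nonatomic = NULL_TREE;
  if (find_atomic_base_type (type, &nonatomic))
    return nonatomic;
  return NULL_TREE;
}
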
Index: gcc/alias.c
===================================================================
*** gcc/alias.c	(revision 201248)
--- gcc/alias.c	(working copy)
*************** objects_must_conflict_p (tree t1, tree t
*** 487,493 ****
    /* If they are the same type, they must conflict.  */
    if (t1 == t2
        /* Likewise if both are volatile.  */
!       || (t1 != 0 && TYPE_VOLATILE (t1) && t2 != 0 && TYPE_VOLATILE (t2)))
      return 1;
  
    set1 = t1 ? get_alias_set (t1) : 0;
--- 487,494 ----
    /* If they are the same type, they must conflict.  */
    if (t1 == t2
        /* Likewise if both are volatile.  */
!       || (t1 != 0 && (TYPE_VOLATILE (t1) || TYPE_ATOMIC (t1)) 
! 	  && t2 != 0 && (TYPE_VOLATILE (t2) || TYPE_ATOMIC (t2))))
      return 1;
  
    set1 = t1 ? get_alias_set (t1) : 0;
Index: gcc/calls.c
===================================================================
*** gcc/calls.c	(revision 201248)
--- gcc/calls.c	(working copy)
*************** expand_call (tree exp, rtx target, int i
*** 2592,2597 ****
--- 2592,2598 ----
  	 optimized.  */
        || (flags & (ECF_RETURNS_TWICE | ECF_NORETURN))
        || TYPE_VOLATILE (TREE_TYPE (TREE_TYPE (addr)))
+       || TYPE_ATOMIC (TREE_TYPE (TREE_TYPE (addr)))
        /* If the called function is nested in the current one, it might access
  	 some of the caller's arguments, but could clobber them beforehand if
  	 the argument areas are shared.  */
Index: gcc/c-family/c-common.h
===================================================================
*** gcc/c-family/c-common.h	(revision 201248)
--- gcc/c-family/c-common.h	(working copy)
*************** enum rid
*** 66,72 ****
    RID_UNSIGNED, RID_LONG,    RID_CONST, RID_EXTERN,
    RID_REGISTER, RID_TYPEDEF, RID_SHORT, RID_INLINE,
    RID_VOLATILE, RID_SIGNED,  RID_AUTO,  RID_RESTRICT,
!   RID_NORETURN,
  
    /* C extensions */
    RID_COMPLEX, RID_THREAD, RID_SAT,
--- 66,72 ----
    RID_UNSIGNED, RID_LONG,    RID_CONST, RID_EXTERN,
    RID_REGISTER, RID_TYPEDEF, RID_SHORT, RID_INLINE,
    RID_VOLATILE, RID_SIGNED,  RID_AUTO,  RID_RESTRICT,
!   RID_NORETURN, RID_ATOMIC,
  
    /* C extensions */
    RID_COMPLEX, RID_THREAD, RID_SAT,
Index: gcc/c-family/c-common.c
===================================================================
*** gcc/c-family/c-common.c	(revision 201248)
--- gcc/c-family/c-common.c	(working copy)
*************** static tree handle_unused_attribute (tre
*** 325,330 ****
--- 325,331 ----
  static tree handle_externally_visible_attribute (tree *, tree, tree, int,
  						 bool *);
  static tree handle_const_attribute (tree *, tree, tree, int, bool *);
+ static tree handle_atomic_attribute (tree *, tree, tree, int, bool *);
  static tree handle_transparent_union_attribute (tree *, tree, tree,
  						int, bool *);
  static tree handle_constructor_attribute (tree *, tree, tree, int, bool *);
*************** const struct c_common_resword c_common_r
*** 401,406 ****
--- 402,408 ----
  {
    { "_Alignas",		RID_ALIGNAS,   D_CONLY },
    { "_Alignof",		RID_ALIGNOF,   D_CONLY },
+   { "_Atomic",		RID_ATOMIC,    D_CONLY },
    { "_Bool",		RID_BOOL,      D_CONLY },
    { "_Complex",		RID_COMPLEX,	0 },
    { "_Imaginary",	RID_IMAGINARY, D_CONLY },
*************** const struct attribute_spec c_common_att
*** 637,642 ****
--- 639,646 ----
    /* The same comments as for noreturn attributes apply to const ones.  */
    { "const",                  0, 0, true,  false, false,
  			      handle_const_attribute, false },
+   { "atomic",		      0, 0, false, true, false,
+ 			      handle_atomic_attribute, false},
    { "transparent_union",      0, 0, false, false, false,
  			      handle_transparent_union_attribute, false },
    { "constructor",            0, 1, true,  false, false,
*************** handle_const_attribute (tree *node, tree
*** 6842,6847 ****
--- 6846,6879 ----
    return NULL_TREE;
  }
  
+ 
+ /* Handle an "atomic" attribute; arguments as in
+    struct attribute_spec.handler.  */
+ 
+ static tree
+ handle_atomic_attribute (tree *node, tree name, tree ARG_UNUSED (args),
+ 			int ARG_UNUSED (flags), bool *no_add_attrs)
+ {
+   bool ignored = true;
+   if (TYPE_P (*node) && TREE_CODE (*node) != ARRAY_TYPE)
+     {
+       tree type = *node;
+ 
+       if (!TYPE_ATOMIC (type))
+ 	{
+ 	  *node = build_qualified_type (type, TYPE_QUAL_ATOMIC);
+ 	  ignored = false;
+ 	}
+     }
+ 
+   if (ignored)
+     {
+       warning (OPT_Wattributes, "%qE attribute ignored", name);
+       *no_add_attrs = true;
+     }
+   return NULL_TREE;
+ }
+ 
  /* Handle a "transparent_union" attribute; arguments as in
     struct attribute_spec.handler.  */
  
*************** sync_resolve_params (location_t loc, tre
*** 10012,10022 ****
    unsigned int parmnum;
  
    function_args_iter_init (&iter, TREE_TYPE (function));
!   /* We've declared the implementation functions to use "volatile void *"
       as the pointer parameter, so we shouldn't get any complaints from the
       call to check_function_arguments what ever type the user used.  */
    function_args_iter_next (&iter);
    ptype = TREE_TYPE (TREE_TYPE ((*params)[0]));
  
    /* For the rest of the values, we need to cast these to FTYPE, so that we
       don't get warnings for passing pointer types, etc.  */
--- 10044,10055 ----
    unsigned int parmnum;
  
    function_args_iter_init (&iter, TREE_TYPE (function));
!   /* We've declared the implementation functions to use "atomic volatile void *"
       as the pointer parameter, so we shouldn't get any complaints from the
       call to check_function_arguments what ever type the user used.  */
    function_args_iter_next (&iter);
    ptype = TREE_TYPE (TREE_TYPE ((*params)[0]));
+   ptype = TYPE_MAIN_VARIANT (ptype);
  
    /* For the rest of the values, we need to cast these to FTYPE, so that we
       don't get warnings for passing pointer types, etc.  */
*************** keyword_is_type_qualifier (enum rid keyw
*** 11388,11393 ****
--- 11421,11427 ----
      case RID_CONST:
      case RID_VOLATILE:
      case RID_RESTRICT:
+     case RID_ATOMIC:
        return true;
      default:
        return false;
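
For reference, user code exercising the qualifier registered above might look
like the sketch below (illustrative only, not taken from the patch).  The
_Atomic spelling is handled by the parser changes further down and is
diagnosed outside C11 mode; the attribute spelling is the GNU form.  Either
way, the __atomic built-ins then see an atomic-qualified object:

/* Illustrative usage sketch.  */
_Atomic int counter;			/* C11 keyword form.  */
int __attribute__ ((atomic)) flag;	/* GNU attribute form.  */

int
read_counter (void)
{
  return __atomic_load_n (&counter, __ATOMIC_SEQ_CST);
}
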
Index: gcc/c-family/c-format.c
===================================================================
*** gcc/c-family/c-format.c	(revision 201248)
--- gcc/c-family/c-format.c	(working copy)
*************** check_format_types (format_wanted_type *
*** 2374,2379 ****
--- 2374,2380 ----
  		  && pedantic
  		  && (TYPE_READONLY (cur_type)
  		      || TYPE_VOLATILE (cur_type)
+ 		      || TYPE_ATOMIC (cur_type)
  		      || TYPE_RESTRICT (cur_type)))
  		warning (OPT_Wformat_, "extra type qualifiers in format "
  			 "argument (argument %d)",
Index: gcc/c-family/c-pretty-print.c
===================================================================
*** gcc/c-family/c-pretty-print.c	(revision 201248)
--- gcc/c-family/c-pretty-print.c	(working copy)
*************** pp_c_cv_qualifiers (c_pretty_printer *pp
*** 186,193 ****
--- 186,201 ----
    if (p != NULL && (*p == '*' || *p == '&'))
      pp_c_whitespace (pp);
  
+   if (qualifiers & TYPE_QUAL_ATOMIC)
+     {
+       pp_c_ws_string (pp, func_type ? "__attribute__((atomic))" : "atomic");
+       previous = true;
+     }
+ 
    if (qualifiers & TYPE_QUAL_CONST)
      {
+       if (previous)
+         pp_c_whitespace (pp);
        pp_c_ws_string (pp, func_type ? "__attribute__((const))" : "const");
        previous = true;
      }
Index: gcc/c/c-tree.h
===================================================================
*** gcc/c/c-tree.h	(revision 201248)
--- gcc/c/c-tree.h	(working copy)
*************** struct c_declspecs {
*** 328,333 ****
--- 328,335 ----
    BOOL_BITFIELD volatile_p : 1;
    /* Whether "restrict" was specified.  */
    BOOL_BITFIELD restrict_p : 1;
+   /* Whether "_Atomic" was specified.  */
+   BOOL_BITFIELD atomic_p : 1;
    /* Whether "_Sat" was specified.  */
    BOOL_BITFIELD saturating_p : 1;
    /* Whether any alignment specifier (even with zero alignment) was
Index: gcc/c/c-aux-info.c
===================================================================
*** gcc/c/c-aux-info.c	(revision 201248)
--- gcc/c/c-aux-info.c	(working copy)
*************** gen_type (const char *ret_val, tree t, f
*** 285,290 ****
--- 285,292 ----
        switch (TREE_CODE (t))
  	{
  	case POINTER_TYPE:
+ 	  if (TYPE_ATOMIC (t))
+ 	    ret_val = concat ("atomic ", ret_val, NULL);
  	  if (TYPE_READONLY (t))
  	    ret_val = concat ("const ", ret_val, NULL);
  	  if (TYPE_VOLATILE (t))
*************** gen_type (const char *ret_val, tree t, f
*** 425,430 ****
--- 427,434 ----
  	  gcc_unreachable ();
  	}
      }
+   if (TYPE_ATOMIC (t))
+     ret_val = concat ("atomic ", ret_val, NULL);
    if (TYPE_READONLY (t))
      ret_val = concat ("const ", ret_val, NULL);
    if (TYPE_VOLATILE (t))
Index: gcc/c/c-decl.c
===================================================================
*** gcc/c/c-decl.c	(revision 201248)
--- gcc/c/c-decl.c	(working copy)
*************** shadow_tag_warned (const struct c_declsp
*** 3712,3717 ****
--- 3712,3718 ----
                     && declspecs->typespec_kind != ctsk_tagfirstref
  		   && (declspecs->const_p
  		       || declspecs->volatile_p
+ 		       || declspecs->atomic_p
  		       || declspecs->restrict_p
  		       || declspecs->address_space))
  	    {
*************** shadow_tag_warned (const struct c_declsp
*** 3801,3806 ****
--- 3802,3808 ----
  
    if (!warned && !in_system_header && (declspecs->const_p
  				       || declspecs->volatile_p
+ 				       || declspecs->atomic_p
  				       || declspecs->restrict_p
  				       || declspecs->address_space))
      {
*************** quals_from_declspecs (const struct c_dec
*** 3832,3837 ****
--- 3834,3840 ----
    int quals = ((specs->const_p ? TYPE_QUAL_CONST : 0)
  	       | (specs->volatile_p ? TYPE_QUAL_VOLATILE : 0)
  	       | (specs->restrict_p ? TYPE_QUAL_RESTRICT : 0)
+ 	       | (specs->atomic_p ? TYPE_QUAL_ATOMIC : 0)
  	       | (ENCODE_QUAL_ADDR_SPACE (specs->address_space)));
    gcc_assert (!specs->type
  	      && !specs->decl_attr
*************** grokdeclarator (const struct c_declarato
*** 4911,4916 ****
--- 4914,4920 ----
    int constp;
    int restrictp;
    int volatilep;
+   int atomicp;
    int type_quals = TYPE_UNQUALIFIED;
    tree name = NULL_TREE;
    bool funcdef_flag = false;
*************** grokdeclarator (const struct c_declarato
*** 5065,5070 ****
--- 5069,5075 ----
    constp = declspecs->const_p + TYPE_READONLY (element_type);
    restrictp = declspecs->restrict_p + TYPE_RESTRICT (element_type);
    volatilep = declspecs->volatile_p + TYPE_VOLATILE (element_type);
+   atomicp = declspecs->atomic_p + TYPE_ATOMIC (element_type);
    as1 = declspecs->address_space;
    as2 = TYPE_ADDR_SPACE (element_type);
    address_space = ADDR_SPACE_GENERIC_P (as1)? as2 : as1;
*************** grokdeclarator (const struct c_declarato
*** 5077,5082 ****
--- 5082,5090 ----
  	pedwarn (loc, OPT_Wpedantic, "duplicate %<restrict%>");
        if (volatilep > 1)
  	pedwarn (loc, OPT_Wpedantic, "duplicate %<volatile%>");
+       if (atomicp > 1)
+ 	pedwarn (loc, OPT_Wpedantic, "duplicate %<_Atomic%>");
+ 
      }
  
    if (!ADDR_SPACE_GENERIC_P (as1) && !ADDR_SPACE_GENERIC_P (as2) && as1 != as2)
*************** grokdeclarator (const struct c_declarato
*** 5090,5095 ****
--- 5098,5104 ----
    type_quals = ((constp ? TYPE_QUAL_CONST : 0)
  		| (restrictp ? TYPE_QUAL_RESTRICT : 0)
  		| (volatilep ? TYPE_QUAL_VOLATILE : 0)
+ 		| (atomicp ? TYPE_QUAL_ATOMIC : 0)
  		| ENCODE_QUAL_ADDR_SPACE (address_space));
  
    /* Warn about storage classes that are invalid for certain
*************** grokdeclarator (const struct c_declarato
*** 5560,5565 ****
--- 5569,5580 ----
  		array_ptr_attrs = NULL_TREE;
  		array_parm_static = 0;
  	      }
+ 
+ 	    if (atomicp)
+ 	      {
+ 		error_at (loc, "%<_Atomic%> type qualifier in array declarator");
+ 		type = error_mark_node;
+ 	      }
  	    break;
  	  }
  	case cdk_function:
*************** grokdeclarator (const struct c_declarato
*** 5651,5656 ****
--- 5666,5676 ----
  	      FOR_EACH_VEC_SAFE_ELT_REVERSE (arg_info->tags, ix, tag)
  		TYPE_CONTEXT (tag->type) = type;
  	    }
+ 	    if (atomicp)
+ 	      {
+ 		error_at (loc, "%<_Atomic%> type qualifier in function declarator");
+ 		type = error_mark_node;
+ 	      }
  	    break;
  	  }
  	case cdk_pointer:
*************** build_null_declspecs (void)
*** 8825,8830 ****
--- 8845,8851 ----
    ret->thread_p = false;
    ret->const_p = false;
    ret->volatile_p = false;
+   ret->atomic_p = false;
    ret->restrict_p = false;
    ret->saturating_p = false;
    ret->alignas_p = false;
*************** declspecs_add_qual (source_location loc,
*** 8886,8891 ****
--- 8907,8916 ----
        specs->restrict_p = true;
        specs->locations[cdw_restrict] = loc;
        break;
+     case RID_ATOMIC:
+       dupe = specs->atomic_p;
+       specs->atomic_p = true;
+       break;
      default:
        gcc_unreachable ();
      }
Index: gcc/c/c-parser.c
===================================================================
*** gcc/c/c-parser.c	(revision 201248)
--- gcc/c/c-parser.c	(working copy)
*************** c_token_starts_typename (c_token *token)
*** 489,494 ****
--- 489,495 ----
  	case RID_UNION:
  	case RID_TYPEOF:
  	case RID_CONST:
+ 	case RID_ATOMIC:
  	case RID_VOLATILE:
  	case RID_RESTRICT:
  	case RID_ATTRIBUTE:
*************** c_token_is_qualifier (c_token *token)
*** 571,576 ****
--- 572,578 ----
  	case RID_VOLATILE:
  	case RID_RESTRICT:
  	case RID_ATTRIBUTE:
+ 	case RID_ATOMIC:
  	  return true;
  	default:
  	  return false;
*************** c_token_starts_declspecs (c_token *token
*** 651,656 ****
--- 653,659 ----
  	case RID_ACCUM:
  	case RID_SAT:
  	case RID_ALIGNAS:
+ 	case RID_ATOMIC:
  	  return true;
  	default:
  	  return false;
*************** c_parser_static_assert_declaration_no_se
*** 1948,1955 ****
--- 1951,1960 ----
       restrict
       volatile
       address-space-qualifier
+      _Atomic
  
     (restrict is new in C99.)
+    (_Atomic is new in C11.)
  
     GNU extensions:
  
*************** c_parser_declspecs (c_parser *parser, st
*** 2171,2176 ****
--- 2176,2185 ----
  	  t = c_parser_typeof_specifier (parser);
  	  declspecs_add_type (loc, specs, t);
  	  break;
+ 	case RID_ATOMIC:
+ 	  if (!flag_isoc11)
+ 	    pedwarn (loc, 0, "%<_Atomic%> qualifier is a C11 feature");
+ 	  /* Fallthru.  */
  	case RID_CONST:
  	case RID_VOLATILE:
  	case RID_RESTRICT:
*************** c_parser_attribute_any_word (c_parser *p
*** 3487,3492 ****
--- 3496,3502 ----
  	case RID_SAT:
  	case RID_TRANSACTION_ATOMIC:
  	case RID_TRANSACTION_CANCEL:
+ 	case RID_ATOMIC:
  	  ok = true;
  	  break;
  	default:
Index: gcc/c/c-typeck.c
===================================================================
*** gcc/c/c-typeck.c	(revision 201248)
--- gcc/c/c-typeck.c	(working copy)
*************** build_indirect_ref (location_t loc, tree
*** 2268,2276 ****
  	  /* A de-reference of a pointer to const is not a const.  It is valid
  	     to change it via some other pointer.  */
  	  TREE_READONLY (ref) = TYPE_READONLY (t);
! 	  TREE_SIDE_EFFECTS (ref)
! 	    = TYPE_VOLATILE (t) || TREE_SIDE_EFFECTS (pointer);
! 	  TREE_THIS_VOLATILE (ref) = TYPE_VOLATILE (t);
  	  protected_set_expr_location (ref, loc);
  	  return ref;
  	}
--- 2268,2276 ----
  	  /* A de-reference of a pointer to const is not a const.  It is valid
  	     to change it via some other pointer.  */
  	  TREE_READONLY (ref) = TYPE_READONLY (t);
! 	  TREE_SIDE_EFFECTS (ref) = (TYPE_VOLATILE (t) || TYPE_ATOMIC (t)
! 				     || TREE_SIDE_EFFECTS (pointer));
! 	  TREE_THIS_VOLATILE (ref) = (TYPE_VOLATILE (t) || TYPE_ATOMIC (t));
  	  protected_set_expr_location (ref, loc);
  	  return ref;
  	}
*************** build_array_ref (location_t loc, tree ar
*** 2408,2416 ****
--- 2408,2418 ----
  	    | TREE_READONLY (array));
        TREE_SIDE_EFFECTS (rval)
  	|= (TYPE_VOLATILE (TREE_TYPE (TREE_TYPE (array)))
+ 	    | TYPE_ATOMIC (TREE_TYPE (TREE_TYPE (array)))
  	    | TREE_SIDE_EFFECTS (array));
        TREE_THIS_VOLATILE (rval)
  	|= (TYPE_VOLATILE (TREE_TYPE (TREE_TYPE (array)))
+ 	    | TYPE_ATOMIC (TREE_TYPE (TREE_TYPE (array)))
  	    /* This was added by rms on 16 Nov 91.
  	       It fixes  vol struct foo *a;  a->elts[1]
  	       in an inline function.
*************** convert_for_assignment (location_t locat
*** 5578,5585 ****
  	  else if (TREE_CODE (ttr) != FUNCTION_TYPE
  		   && TREE_CODE (ttl) != FUNCTION_TYPE)
  	    {
! 	      if (TYPE_QUALS_NO_ADDR_SPACE (ttr)
! 		  & ~TYPE_QUALS_NO_ADDR_SPACE (ttl))
  		{
  		  WARN_FOR_QUALIFIERS (location, 0,
  				       G_("passing argument %d of %qE discards "
--- 5580,5589 ----
  	  else if (TREE_CODE (ttr) != FUNCTION_TYPE
  		   && TREE_CODE (ttl) != FUNCTION_TYPE)
  	    {
! 	      /* Assignments between atomic and non-atomic objects are OK since
! 	         the atomic access is always through an interface call.  */
! 	      if (TYPE_QUALS_NO_ADDR_SPACE_NO_ATOMIC (ttr)
! 		  & ~TYPE_QUALS_NO_ADDR_SPACE_NO_ATOMIC (ttl))
  		{
  		  WARN_FOR_QUALIFIERS (location, 0,
  				       G_("passing argument %d of %qE discards "
Index: gcc/objc/objc-act.c
===================================================================
*** gcc/objc/objc-act.c	(revision 201248)
--- gcc/objc/objc-act.c	(working copy)
*************** objc_push_parm (tree parm)
*** 8244,8249 ****
--- 8244,8250 ----
    c_apply_type_quals_to_decl
    ((TYPE_READONLY (TREE_TYPE (parm)) ? TYPE_QUAL_CONST : 0)
     | (TYPE_RESTRICT (TREE_TYPE (parm)) ? TYPE_QUAL_RESTRICT : 0)
+    | (TYPE_ATOMIC (TREE_TYPE (parm)) ? TYPE_QUAL_ATOMIC : 0)
     | (TYPE_VOLATILE (TREE_TYPE (parm)) ? TYPE_QUAL_VOLATILE : 0), parm);
  
    objc_parmlist = chainon (objc_parmlist, parm);
Index: gcc/cp/cp-tree.h
===================================================================
*** gcc/cp/cp-tree.h	(revision 201248)
--- gcc/cp/cp-tree.h	(working copy)
*************** enum languages { lang_c, lang_cplusplus,
*** 1293,1298 ****
--- 1293,1302 ----
  #define CP_TYPE_VOLATILE_P(NODE)			\
    ((cp_type_quals (NODE) & TYPE_QUAL_VOLATILE) != 0)
  
+ /* Nonzero if this type is atomic-qualified.  */
+ #define CP_TYPE_ATOMIC_P(NODE)				\
+   ((cp_type_quals (NODE) & TYPE_QUAL_ATOMIC) != 0)
+ 
  /* Nonzero if this type is restrict-qualified.  */
  #define CP_TYPE_RESTRICT_P(NODE)			\
    ((cp_type_quals (NODE) & TYPE_QUAL_RESTRICT) != 0)
*************** typedef enum cp_decl_spec {
*** 4758,4763 ****
--- 4762,4768 ----
    ds_const,
    ds_volatile,
    ds_restrict,
+   ds_atomic,
    ds_inline,
    ds_virtual,
    ds_explicit,
Index: gcc/cp/class.c
===================================================================
*** gcc/cp/class.c	(revision 201248)
--- gcc/cp/class.c	(working copy)
*************** build_simple_base_path (tree expr, tree
*** 542,548 ****
  	   to mark the expression itself.  */
  	if (type_quals & TYPE_QUAL_CONST)
  	  TREE_READONLY (expr) = 1;
! 	if (type_quals & TYPE_QUAL_VOLATILE)
  	  TREE_THIS_VOLATILE (expr) = 1;
  
  	return expr;
--- 542,548 ----
  	   to mark the expression itself.  */
  	if (type_quals & TYPE_QUAL_CONST)
  	  TREE_READONLY (expr) = 1;
! 	if (type_quals & (TYPE_QUAL_VOLATILE|TYPE_QUAL_ATOMIC))
  	  TREE_THIS_VOLATILE (expr) = 1;
  
  	return expr;
Index: gcc/cp/cvt.c
===================================================================
*** gcc/cp/cvt.c	(revision 201248)
--- gcc/cp/cvt.c	(working copy)
*************** diagnose_ref_binding (location_t loc, tr
*** 385,390 ****
--- 385,396 ----
        else if (CP_TYPE_VOLATILE_P (ttl))
  	msg = G_("conversion to volatile reference type %q#T "
  	         "from rvalue of type %qT");
+       else if (CP_TYPE_ATOMIC_P (ttl) && decl)
+ 	msg = G_("initialization of atomic reference type %q#T from "
+ 	         "rvalue of type %qT");
+       else if (CP_TYPE_ATOMIC_P (ttl))
+ 	msg = G_("conversion to atomic reference type %q#T "
+ 	         "from rvalue of type %qT");
        else if (decl)
  	msg = G_("initialization of non-const reference type %q#T from "
  	         "rvalue of type %qT");
*************** convert_from_reference (tree val)
*** 537,543 ****
  	  so that we get the proper error message if the result is used
  	  to assign to.  Also, &* is supposed to be a no-op.  */
        TREE_READONLY (ref) = CP_TYPE_CONST_P (t);
!       TREE_THIS_VOLATILE (ref) = CP_TYPE_VOLATILE_P (t);
        TREE_SIDE_EFFECTS (ref)
  	= (TREE_THIS_VOLATILE (ref) || TREE_SIDE_EFFECTS (val));
        val = ref;
--- 543,550 ----
  	  so that we get the proper error message if the result is used
  	  to assign to.  Also, &* is supposed to be a no-op.  */
        TREE_READONLY (ref) = CP_TYPE_CONST_P (t);
!       TREE_THIS_VOLATILE (ref) = (CP_TYPE_VOLATILE_P (t) 
! 				  || CP_TYPE_ATOMIC_P (t));
        TREE_SIDE_EFFECTS (ref)
  	= (TREE_THIS_VOLATILE (ref) || TREE_SIDE_EFFECTS (val));
        val = ref;
*************** convert_to_void (tree expr, impl_conv_vo
*** 1010,1016 ****
  	tree type = TREE_TYPE (expr);
  	int is_reference = TREE_CODE (TREE_TYPE (TREE_OPERAND (expr, 0)))
  			   == REFERENCE_TYPE;
! 	int is_volatile = TYPE_VOLATILE (type);
  	int is_complete = COMPLETE_TYPE_P (complete_type (type));
  
  	/* Can't load the value if we don't know the type.  */
--- 1017,1023 ----
  	tree type = TREE_TYPE (expr);
  	int is_reference = TREE_CODE (TREE_TYPE (TREE_OPERAND (expr, 0)))
  			   == REFERENCE_TYPE;
! 	int is_volatile = (TYPE_VOLATILE (type) || TYPE_ATOMIC (type));
  	int is_complete = COMPLETE_TYPE_P (complete_type (type));
  
  	/* Can't load the value if we don't know the type.  */
*************** convert_to_void (tree expr, impl_conv_vo
*** 1170,1176 ****
  	tree type = TREE_TYPE (expr);
  	int is_complete = COMPLETE_TYPE_P (complete_type (type));
  
! 	if (TYPE_VOLATILE (type) && !is_complete && (complain & tf_warning))
  	  switch (implicit)
  	    {
  	      case ICV_CAST:
--- 1177,1184 ----
  	tree type = TREE_TYPE (expr);
  	int is_complete = COMPLETE_TYPE_P (complete_type (type));
  
! 	if ((TYPE_VOLATILE (type) || TYPE_ATOMIC (type))
! 	    && !is_complete && (complain & tf_warning))
  	  switch (implicit)
  	    {
  	      case ICV_CAST:
Index: gcc/cp/decl.c
===================================================================
*** gcc/cp/decl.c	(revision 201248)
--- gcc/cp/decl.c	(working copy)
*************** grokfndecl (tree ctype,
*** 7368,7374 ****
    for (t = parms; t; t = DECL_CHAIN (t))
      DECL_CONTEXT (t) = decl;
    /* Propagate volatile out from type to decl.  */
!   if (TYPE_VOLATILE (type))
      TREE_THIS_VOLATILE (decl) = 1;
  
    /* Setup decl according to sfk.  */
--- 7368,7374 ----
    for (t = parms; t; t = DECL_CHAIN (t))
      DECL_CONTEXT (t) = decl;
    /* Propagate volatile out from type to decl.  */
!   if (TYPE_VOLATILE (type) || TYPE_ATOMIC (type))
      TREE_THIS_VOLATILE (decl) = 1;
  
    /* Setup decl according to sfk.  */
*************** build_ptrmemfunc_type (tree type)
*** 7984,7989 ****
--- 7984,7990 ----
        TYPE_READONLY (t) = (type_quals & TYPE_QUAL_CONST) != 0;
        TYPE_VOLATILE (t) = (type_quals & TYPE_QUAL_VOLATILE) != 0;
        TYPE_RESTRICT (t) = (type_quals & TYPE_QUAL_RESTRICT) != 0;
+       TYPE_ATOMIC (t) = (type_quals & TYPE_QUAL_ATOMIC) != 0;
        TYPE_MAIN_VARIANT (t) = unqualified_variant;
        TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (unqualified_variant);
        TYPE_NEXT_VARIANT (unqualified_variant) = t;
*************** grokdeclarator (const cp_declarator *dec
*** 9199,9204 ****
--- 9200,9207 ----
      type_quals |= TYPE_QUAL_VOLATILE;
    if (decl_spec_seq_has_spec_p (declspecs, ds_restrict))
      type_quals |= TYPE_QUAL_RESTRICT;
+   if (decl_spec_seq_has_spec_p (declspecs, ds_atomic))
+     type_quals |= TYPE_QUAL_ATOMIC;
    if (sfk == sfk_conversion && type_quals != TYPE_UNQUALIFIED)
      error ("qualifiers are not allowed on declaration of %<operator %T%>",
  	   ctor_return_type);
Index: gcc/cp/mangle.c
===================================================================
*** gcc/cp/mangle.c	(revision 201248)
--- gcc/cp/mangle.c	(working copy)
*************** dump_substitution_candidates (void)
*** 326,331 ****
--- 326,332 ----
        if (TYPE_P (el) &&
  	  (CP_TYPE_RESTRICT_P (el)
  	   || CP_TYPE_VOLATILE_P (el)
+ 	   || CP_TYPE_ATOMIC_P (el)
  	   || CP_TYPE_CONST_P (el)))
  	fprintf (stderr, "CV-");
        fprintf (stderr, "%s (%s at %p)\n",
Index: gcc/cp/parser.c
===================================================================
*** gcc/cp/parser.c	(revision 201248)
--- gcc/cp/parser.c	(working copy)
*************** cp_parser_type_specifier (cp_parser* par
*** 14090,14095 ****
--- 14090,14103 ----
  	*is_cv_qualifier = true;
        break;
  
+     case RID_ATOMIC:
+       ds = ds_atomic;
+       if (is_cv_qualifier)
+ 	*is_cv_qualifier = true;
+       if (!flag_isoc11)
+         pedwarn (token->location, 0, "%<_Atomic%> qualifier is a C11 feature");
+       break;
+ 
      case RID_RESTRICT:
        ds = ds_restrict;
        if (is_cv_qualifier)
*************** cp_parser_cv_qualifier_seq_opt (cp_parse
*** 17341,17346 ****
--- 17349,17358 ----
  	  cv_qualifier = TYPE_QUAL_RESTRICT;
  	  break;
  
+ 	case RID_ATOMIC:
+ 	  cv_qualifier = TYPE_QUAL_ATOMIC;
+ 	  break;
+ 
  	default:
  	  cv_qualifier = TYPE_UNQUALIFIED;
  	  break;
*************** set_and_check_decl_spec_loc (cp_decl_spe
*** 23477,23482 ****
--- 23489,23495 ----
  	    "const",
  	    "volatile",
  	    "restrict",
+ 	    "atomic",
  	    "inline",
  	    "virtual",
  	    "explicit",
Index: gcc/cp/pt.c
===================================================================
*** gcc/cp/pt.c	(revision 201248)
--- gcc/cp/pt.c	(working copy)
*************** check_cv_quals_for_unify (int strict, tr
*** 16375,16381 ****
        if ((TREE_CODE (arg) == REFERENCE_TYPE
  	   || TREE_CODE (arg) == FUNCTION_TYPE
  	   || TREE_CODE (arg) == METHOD_TYPE)
! 	  && (parm_quals & (TYPE_QUAL_CONST | TYPE_QUAL_VOLATILE)))
  	return 0;
  
        if ((!POINTER_TYPE_P (arg) && TREE_CODE (arg) != TEMPLATE_TYPE_PARM)
--- 16375,16382 ----
        if ((TREE_CODE (arg) == REFERENCE_TYPE
  	   || TREE_CODE (arg) == FUNCTION_TYPE
  	   || TREE_CODE (arg) == METHOD_TYPE)
! 	  && (parm_quals & (TYPE_QUAL_CONST | TYPE_QUAL_VOLATILE 
! 			    | TYPE_QUAL_ATOMIC)))
  	return 0;
  
        if ((!POINTER_TYPE_P (arg) && TREE_CODE (arg) != TEMPLATE_TYPE_PARM)
Index: gcc/cp/rtti.c
===================================================================
*** gcc/cp/rtti.c	(revision 201248)
--- gcc/cp/rtti.c	(working copy)
*************** qualifier_flags (tree type)
*** 808,813 ****
--- 808,815 ----
      flags |= 2;
    if (quals & TYPE_QUAL_RESTRICT)
      flags |= 4;
+   if (quals & TYPE_QUAL_ATOMIC)
+     flags |= 8;
    return flags;
  }
  
Index: gcc/cp/semantics.c
===================================================================
*** gcc/cp/semantics.c	(revision 201248)
--- gcc/cp/semantics.c	(working copy)
*************** non_const_var_error (tree r)
*** 7776,7781 ****
--- 7776,7784 ----
        else if (CP_TYPE_VOLATILE_P (type))
  	inform (DECL_SOURCE_LOCATION (r),
  		"%q#D is volatile", r);
+       else if (CP_TYPE_ATOMIC_P (type))
+ 	inform (DECL_SOURCE_LOCATION (r),
+ 		"%q#D is atomic", r);
        else if (!DECL_INITIAL (r)
  	       || !TREE_CONSTANT (DECL_INITIAL (r)))
  	inform (DECL_SOURCE_LOCATION (r),
Index: gcc/cp/tree.c
===================================================================
*** gcc/cp/tree.c	(revision 201248)
--- gcc/cp/tree.c	(working copy)
*************** cp_build_qualified_type_real (tree type,
*** 1059,1072 ****
    /* A reference or method type shall not be cv-qualified.
       [dcl.ref], [dcl.fct].  This used to be an error, but as of DR 295
       (in CD1) we always ignore extra cv-quals on functions.  */
!   if (type_quals & (TYPE_QUAL_CONST | TYPE_QUAL_VOLATILE)
        && (TREE_CODE (type) == REFERENCE_TYPE
  	  || TREE_CODE (type) == FUNCTION_TYPE
  	  || TREE_CODE (type) == METHOD_TYPE))
      {
        if (TREE_CODE (type) == REFERENCE_TYPE)
! 	bad_quals |= type_quals & (TYPE_QUAL_CONST | TYPE_QUAL_VOLATILE);
!       type_quals &= ~(TYPE_QUAL_CONST | TYPE_QUAL_VOLATILE);
      }
  
    /* But preserve any function-cv-quals on a FUNCTION_TYPE.  */
--- 1059,1073 ----
    /* A reference or method type shall not be cv-qualified.
       [dcl.ref], [dcl.fct].  This used to be an error, but as of DR 295
       (in CD1) we always ignore extra cv-quals on functions.  */
!   if (type_quals & (TYPE_QUAL_CONST | TYPE_QUAL_VOLATILE | TYPE_QUAL_ATOMIC)
        && (TREE_CODE (type) == REFERENCE_TYPE
  	  || TREE_CODE (type) == FUNCTION_TYPE
  	  || TREE_CODE (type) == METHOD_TYPE))
      {
        if (TREE_CODE (type) == REFERENCE_TYPE)
! 	bad_quals |= type_quals 
! 		    & (TYPE_QUAL_CONST | TYPE_QUAL_VOLATILE | TYPE_QUAL_ATOMIC);
!       type_quals &= ~(TYPE_QUAL_CONST | TYPE_QUAL_VOLATILE | TYPE_QUAL_ATOMIC);
      }
  
    /* But preserve any function-cv-quals on a FUNCTION_TYPE.  */
*************** cv_unqualified (tree type)
*** 1142,1148 ****
      return type;
  
    quals = cp_type_quals (type);
!   quals &= ~(TYPE_QUAL_CONST|TYPE_QUAL_VOLATILE);
    return cp_build_qualified_type (type, quals);
  }
  
--- 1143,1149 ----
      return type;
  
    quals = cp_type_quals (type);
!   quals &= ~(TYPE_QUAL_CONST|TYPE_QUAL_VOLATILE|TYPE_QUAL_ATOMIC);
    return cp_build_qualified_type (type, quals);
  }
  
Index: gcc/cp/typeck.c
===================================================================
*** gcc/cp/typeck.c	(revision 201248)
--- gcc/cp/typeck.c	(working copy)
*************** build_class_member_access_expr (tree obj
*** 2417,2423 ****
  	 expression itself.  */
        if (type_quals & TYPE_QUAL_CONST)
  	TREE_READONLY (result) = 1;
!       if (type_quals & TYPE_QUAL_VOLATILE)
  	TREE_THIS_VOLATILE (result) = 1;
      }
    else if (BASELINK_P (member))
--- 2417,2423 ----
  	 expression itself.  */
        if (type_quals & TYPE_QUAL_CONST)
  	TREE_READONLY (result) = 1;
!       if (type_quals & (TYPE_QUAL_VOLATILE|TYPE_QUAL_ATOMIC))
  	TREE_THIS_VOLATILE (result) = 1;
      }
    else if (BASELINK_P (member))
*************** cp_build_indirect_ref (tree ptr, ref_ope
*** 2941,2947 ****
  	     so that we get the proper error message if the result is used
  	     to assign to.  Also, &* is supposed to be a no-op.  */
  	  TREE_READONLY (ref) = CP_TYPE_CONST_P (t);
! 	  TREE_THIS_VOLATILE (ref) = CP_TYPE_VOLATILE_P (t);
  	  TREE_SIDE_EFFECTS (ref)
  	    = (TREE_THIS_VOLATILE (ref) || TREE_SIDE_EFFECTS (pointer));
  	  return ref;
--- 2941,2948 ----
  	     so that we get the proper error message if the result is used
  	     to assign to.  Also, &* is supposed to be a no-op.  */
  	  TREE_READONLY (ref) = CP_TYPE_CONST_P (t);
! 	  TREE_THIS_VOLATILE (ref) = (CP_TYPE_VOLATILE_P (t) 
! 				      || CP_TYPE_ATOMIC_P (t));
  	  TREE_SIDE_EFFECTS (ref)
  	    = (TREE_THIS_VOLATILE (ref) || TREE_SIDE_EFFECTS (pointer));
  	  return ref;
*************** cp_build_array_ref (location_t loc, tree
*** 3126,3134 ****
        TREE_READONLY (rval)
  	|= (CP_TYPE_CONST_P (type) | TREE_READONLY (array));
        TREE_SIDE_EFFECTS (rval)
! 	|= (CP_TYPE_VOLATILE_P (type) | TREE_SIDE_EFFECTS (array));
        TREE_THIS_VOLATILE (rval)
! 	|= (CP_TYPE_VOLATILE_P (type) | TREE_THIS_VOLATILE (array));
        ret = require_complete_type_sfinae (fold_if_not_in_template (rval),
  					  complain);
        protected_set_expr_location (ret, loc);
--- 3127,3137 ----
        TREE_READONLY (rval)
  	|= (CP_TYPE_CONST_P (type) | TREE_READONLY (array));
        TREE_SIDE_EFFECTS (rval)
! 	|= (CP_TYPE_VOLATILE_P (type) | TREE_SIDE_EFFECTS (array)
! 	    | CP_TYPE_ATOMIC_P (type));
        TREE_THIS_VOLATILE (rval)
! 	|= (CP_TYPE_VOLATILE_P (type) | TREE_THIS_VOLATILE (array)
! 	    | CP_TYPE_ATOMIC_P (type));
        ret = require_complete_type_sfinae (fold_if_not_in_template (rval),
  					  complain);
        protected_set_expr_location (ret, loc);
*************** check_return_expr (tree retval, bool *no
*** 8423,8429 ****
       && same_type_p ((TYPE_MAIN_VARIANT (TREE_TYPE (retval))),
                       (TYPE_MAIN_VARIANT (functype)))
       /* And the returned value must be non-volatile.  */
!      && ! TYPE_VOLATILE (TREE_TYPE (retval)));
       
    if (fn_returns_value_p && flag_elide_constructors)
      {
--- 8426,8433 ----
       && same_type_p ((TYPE_MAIN_VARIANT (TREE_TYPE (retval))),
                       (TYPE_MAIN_VARIANT (functype)))
       /* And the returned value must be non-volatile.  */
!      && ! TYPE_VOLATILE (TREE_TYPE (retval)) 
!      && ! TYPE_ATOMIC (TREE_TYPE (retval)));
       
    if (fn_returns_value_p && flag_elide_constructors)
      {
*************** cp_type_quals (const_tree type)
*** 8691,8697 ****
    /* METHOD and REFERENCE_TYPEs should never have quals.  */
    gcc_assert ((TREE_CODE (type) != METHOD_TYPE
  	       && TREE_CODE (type) != REFERENCE_TYPE)
! 	      || ((quals & (TYPE_QUAL_CONST|TYPE_QUAL_VOLATILE))
  		  == TYPE_UNQUALIFIED));
    return quals;
  }
--- 8695,8701 ----
    /* METHOD and REFERENCE_TYPEs should never have quals.  */
    gcc_assert ((TREE_CODE (type) != METHOD_TYPE
  	       && TREE_CODE (type) != REFERENCE_TYPE)
! 	    || ((quals & (TYPE_QUAL_CONST|TYPE_QUAL_VOLATILE|TYPE_QUAL_ATOMIC))
  		  == TYPE_UNQUALIFIED));
    return quals;
  }
*************** bool
*** 8751,8757 ****
  cv_qualified_p (const_tree type)
  {
    int quals = cp_type_quals (type);
!   return (quals & (TYPE_QUAL_CONST|TYPE_QUAL_VOLATILE)) != 0;
  }
  
  /* Returns nonzero if the TYPE contains a mutable member.  */
--- 8755,8761 ----
  cv_qualified_p (const_tree type)
  {
    int quals = cp_type_quals (type);
!   return (quals & (TYPE_QUAL_CONST|TYPE_QUAL_VOLATILE|TYPE_QUAL_ATOMIC)) != 0;
  }
  
  /* Returns nonzero if the TYPE contains a mutable member.  */
Index: libstdc++-v3/include/bits/atomic_base.h
===================================================================
*** libstdc++-v3/include/bits/atomic_base.h	(revision 201248)
--- libstdc++-v3/include/bits/atomic_base.h	(working copy)
*************** _GLIBCXX_BEGIN_NAMESPACE_VERSION
*** 346,361 ****
    // atomic_char32_t char32_t
    // atomic_wchar_t  wchar_t
    //
!   // NB: Assuming _ITp is an integral scalar type that is 1, 2, 4, or
!   // 8 bytes, since that is what GCC built-in functions for atomic
    // memory access expect.
    template<typename _ITp>
      struct __atomic_base
      {
      private:
!       typedef _ITp 	__int_type;
  
!       __int_type 	_M_i;
  
      public:
        __atomic_base() noexcept = default;
--- 346,362 ----
    // atomic_char32_t char32_t
    // atomic_wchar_t  wchar_t
    //
!   // NB: Assuming _ITp is an integral scalar type that is 1, 2, 4, 8, or
!   // 16 bytes, since that is what GCC built-in functions for atomic
    // memory access expect.
    template<typename _ITp>
      struct __atomic_base
      {
      private:
!       typedef _ITp 				__int_type;
!       typedef _ITp __attribute__ ((atomic))	__atomic_int_type;
  
!       __atomic_int_type _M_i;
  
      public:
        __atomic_base() noexcept = default;
*************** _GLIBCXX_BEGIN_NAMESPACE_VERSION
*** 669,677 ****
      struct __atomic_base<_PTp*>
      {
      private:
!       typedef _PTp* 	__pointer_type;
  
!       __pointer_type 	_M_p;
  
        // Factored out to facilitate explicit specialization.
        constexpr ptrdiff_t
--- 670,679 ----
      struct __atomic_base<_PTp*>
      {
      private:
!       typedef _PTp* 				__pointer_type;
!       typedef _PTp* __attribute__ ((atomic))	__atomic_pointer_type;
  
!       __atomic_pointer_type 	_M_p;
  
        // Factored out to facilitate explicit specialization.
        constexpr ptrdiff_t
Index: libstdc++-v3/include/std/atomic
===================================================================
*** libstdc++-v3/include/std/atomic	(revision 201248)
--- libstdc++-v3/include/std/atomic	(working copy)
*************** _GLIBCXX_BEGIN_NAMESPACE_VERSION
*** 161,167 ****
      struct atomic
      {
      private:
!       _Tp _M_i;
  
      public:
        atomic() noexcept = default;
--- 161,167 ----
      struct atomic
      {
      private:
!       _Tp __attribute__ ((atomic)) _M_i;
  
      public:
        atomic() noexcept = default;
Index: gcc/fortran/types.def
===================================================================
*** gcc/fortran/types.def	(revision 201248)
--- gcc/fortran/types.def	(working copy)
*************** DEF_PRIMITIVE_TYPE (BT_CONST_VOLATILE_PT
*** 74,79 ****
--- 74,87 ----
  		    build_pointer_type
  		     (build_qualified_type (void_type_node,
  					  TYPE_QUAL_VOLATILE|TYPE_QUAL_CONST)))
+ DEF_PRIMITIVE_TYPE (BT_ATOMIC_PTR,
+                     build_pointer_type
+                      (build_qualified_type (void_type_node,
+                                           TYPE_QUAL_VOLATILE|TYPE_QUAL_ATOMIC)))
+ DEF_PRIMITIVE_TYPE (BT_CONST_ATOMIC_PTR,
+                     build_pointer_type
+                      (build_qualified_type (void_type_node,
+                           TYPE_QUAL_VOLATILE|TYPE_QUAL_CONST|TYPE_QUAL_ATOMIC)))
  DEF_POINTER_TYPE (BT_PTR_LONG, BT_LONG)
  DEF_POINTER_TYPE (BT_PTR_ULONGLONG, BT_ULONGLONG)
  DEF_POINTER_TYPE (BT_PTR_PTR, BT_PTR)
*************** DEF_FUNCTION_TYPE_2 (BT_FN_I8_CONST_VPTR
*** 113,122 ****
  		     BT_INT)
  DEF_FUNCTION_TYPE_2 (BT_FN_I16_CONST_VPTR_INT, BT_I16, BT_CONST_VOLATILE_PTR,
  		     BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_VOID_VPTR_INT, BT_VOID, BT_VOLATILE_PTR, BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_VPTR_INT, BT_BOOL, BT_VOLATILE_PTR, BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_SIZE_CONST_VPTR, BT_BOOL, BT_SIZE,
! 		     BT_CONST_VOLATILE_PTR)
  
  
  DEF_POINTER_TYPE (BT_PTR_FN_VOID_PTR_PTR, BT_FN_VOID_PTR_PTR)
--- 121,140 ----
  		     BT_INT)
  DEF_FUNCTION_TYPE_2 (BT_FN_I16_CONST_VPTR_INT, BT_I16, BT_CONST_VOLATILE_PTR,
  		     BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I1_CONST_APTR_INT, BT_I1, BT_CONST_ATOMIC_PTR, 
!                      BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I2_CONST_APTR_INT, BT_I2, BT_CONST_ATOMIC_PTR,
!                      BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I4_CONST_APTR_INT, BT_I4, BT_CONST_ATOMIC_PTR,
!                      BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I8_CONST_APTR_INT, BT_I8, BT_CONST_ATOMIC_PTR,
!                      BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I16_CONST_APTR_INT, BT_I16, BT_CONST_ATOMIC_PTR,
!                      BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_VOID_APTR_INT, BT_VOID, BT_ATOMIC_PTR, BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_APTR_INT, BT_BOOL, BT_ATOMIC_PTR, BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_SIZE_CONST_APTR, BT_BOOL, BT_SIZE,
! 		     BT_CONST_ATOMIC_PTR)
  
  
  DEF_POINTER_TYPE (BT_PTR_FN_VOID_PTR_PTR, BT_FN_VOID_PTR_PTR)
*************** DEF_FUNCTION_TYPE_3 (BT_FN_I2_VPTR_I2_IN
*** 144,169 ****
  DEF_FUNCTION_TYPE_3 (BT_FN_I4_VPTR_I4_INT, BT_I4, BT_VOLATILE_PTR, BT_I4, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_I8_VPTR_I8_INT, BT_I8, BT_VOLATILE_PTR, BT_I8, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_I16_VPTR_I16_INT, BT_I16, BT_VOLATILE_PTR, BT_I16, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I1_INT, BT_VOID, BT_VOLATILE_PTR, BT_I1, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I2_INT, BT_VOID, BT_VOLATILE_PTR, BT_I2, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I4_INT, BT_VOID, BT_VOLATILE_PTR, BT_I4, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I8_INT, BT_VOID, BT_VOLATILE_PTR, BT_I8, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I16_INT, BT_VOID, BT_VOLATILE_PTR, BT_I16, BT_INT)
  
  DEF_FUNCTION_TYPE_4 (BT_FN_VOID_OMPFN_PTR_UINT_UINT,
                       BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT, BT_UINT)
  DEF_FUNCTION_TYPE_4 (BT_FN_VOID_PTR_WORD_WORD_PTR,
  		     BT_VOID, BT_PTR, BT_WORD, BT_WORD, BT_PTR)
! DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_VPTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_VOLATILE_PTR, BT_PTR, BT_INT)
! DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_CONST_VPTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_CONST_VOLATILE_PTR, BT_PTR, BT_INT)
  
  DEF_FUNCTION_TYPE_5 (BT_FN_BOOL_LONG_LONG_LONG_LONGPTR_LONGPTR,
                       BT_BOOL, BT_LONG, BT_LONG, BT_LONG,
  		     BT_PTR_LONG, BT_PTR_LONG)
! DEF_FUNCTION_TYPE_5 (BT_FN_VOID_SIZE_VPTR_PTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_VOLATILE_PTR, BT_PTR, BT_PTR, BT_INT)
  
  DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR,
                       BT_BOOL, BT_LONG, BT_LONG, BT_LONG, BT_LONG,
--- 162,197 ----
  DEF_FUNCTION_TYPE_3 (BT_FN_I4_VPTR_I4_INT, BT_I4, BT_VOLATILE_PTR, BT_I4, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_I8_VPTR_I8_INT, BT_I8, BT_VOLATILE_PTR, BT_I8, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_I16_VPTR_I16_INT, BT_I16, BT_VOLATILE_PTR, BT_I16, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I1_APTR_I1_INT, BT_I1, BT_ATOMIC_PTR, BT_I1, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I2_APTR_I2_INT, BT_I2, BT_ATOMIC_PTR, BT_I2, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I4_APTR_I4_INT, BT_I4, BT_ATOMIC_PTR, BT_I4, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I8_APTR_I8_INT, BT_I8, BT_ATOMIC_PTR, BT_I8, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I16_APTR_I16_INT, BT_I16, BT_ATOMIC_PTR, BT_I16, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I1_INT, BT_VOID, BT_VOLATILE_PTR, BT_I1, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I2_INT, BT_VOID, BT_VOLATILE_PTR, BT_I2, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I4_INT, BT_VOID, BT_VOLATILE_PTR, BT_I4, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I8_INT, BT_VOID, BT_VOLATILE_PTR, BT_I8, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I16_INT, BT_VOID, BT_VOLATILE_PTR, BT_I16, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I1_INT, BT_VOID, BT_ATOMIC_PTR, BT_I1, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I2_INT, BT_VOID, BT_ATOMIC_PTR, BT_I2, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I4_INT, BT_VOID, BT_ATOMIC_PTR, BT_I4, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I8_INT, BT_VOID, BT_ATOMIC_PTR, BT_I8, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I16_INT, BT_VOID, BT_ATOMIC_PTR, BT_I16, BT_INT)
  
  DEF_FUNCTION_TYPE_4 (BT_FN_VOID_OMPFN_PTR_UINT_UINT,
                       BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT, BT_UINT)
  DEF_FUNCTION_TYPE_4 (BT_FN_VOID_PTR_WORD_WORD_PTR,
  		     BT_VOID, BT_PTR, BT_WORD, BT_WORD, BT_PTR)
! DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_APTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_ATOMIC_PTR, BT_PTR, BT_INT)
! DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_CONST_APTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_CONST_ATOMIC_PTR, BT_PTR, BT_INT)
  
  DEF_FUNCTION_TYPE_5 (BT_FN_BOOL_LONG_LONG_LONG_LONGPTR_LONGPTR,
                       BT_BOOL, BT_LONG, BT_LONG, BT_LONG,
  		     BT_PTR_LONG, BT_PTR_LONG)
! DEF_FUNCTION_TYPE_5 (BT_FN_VOID_SIZE_APTR_PTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_ATOMIC_PTR, BT_PTR, BT_PTR, BT_INT)
  
  DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR,
                       BT_BOOL, BT_LONG, BT_LONG, BT_LONG, BT_LONG,
*************** DEF_FUNCTION_TYPE_6 (BT_FN_VOID_OMPFN_PT
*** 174,196 ****
  DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULLPTR_ULLPTR,
  		     BT_BOOL, BT_BOOL, BT_ULONGLONG, BT_ULONGLONG,
  		     BT_ULONGLONG, BT_PTR_ULONGLONG, BT_PTR_ULONGLONG)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I1_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I1, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I2_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I2, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I4_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I4, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I8_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I8, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I16_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I16, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_SIZE_VPTR_PTR_PTR_INT_INT, BT_BOOL, BT_SIZE,
! 		     BT_VOLATILE_PTR, BT_PTR, BT_PTR, BT_INT, BT_INT)
  
  DEF_FUNCTION_TYPE_7 (BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG,
                       BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT,
--- 202,224 ----
  DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULLPTR_ULLPTR,
  		     BT_BOOL, BT_BOOL, BT_ULONGLONG, BT_ULONGLONG,
  		     BT_ULONGLONG, BT_PTR_ULONGLONG, BT_PTR_ULONGLONG)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I1_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I1, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I2_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I2, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I4_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I4, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I8_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I8, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I16_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I16, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_SIZE_APTR_PTR_PTR_INT_INT, BT_BOOL, BT_SIZE,
! 		     BT_ATOMIC_PTR, BT_PTR, BT_PTR, BT_INT, BT_INT)
  
  DEF_FUNCTION_TYPE_7 (BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG,
                       BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT,
Index: gcc/builtin-types.def
===================================================================
*** gcc/builtin-types.def	(revision 201248)
--- gcc/builtin-types.def	(working copy)
*************** DEF_PRIMITIVE_TYPE (BT_CONST_VOLATILE_PT
*** 99,104 ****
--- 99,112 ----
  		    build_pointer_type
  		     (build_qualified_type (void_type_node,
  					  TYPE_QUAL_VOLATILE|TYPE_QUAL_CONST)))
+ DEF_PRIMITIVE_TYPE (BT_ATOMIC_PTR,
+ 		    build_pointer_type
+ 		     (build_qualified_type (void_type_node,
+ 					  TYPE_QUAL_VOLATILE|TYPE_QUAL_ATOMIC)))
+ DEF_PRIMITIVE_TYPE (BT_CONST_ATOMIC_PTR,
+ 		    build_pointer_type
+ 		     (build_qualified_type (void_type_node,
+ 			  TYPE_QUAL_VOLATILE|TYPE_QUAL_CONST|TYPE_QUAL_ATOMIC)))
  DEF_PRIMITIVE_TYPE (BT_PTRMODE, (*lang_hooks.types.type_for_mode)(ptr_mode, 0))
  DEF_PRIMITIVE_TYPE (BT_INT_PTR, integer_ptr_type_node)
  DEF_PRIMITIVE_TYPE (BT_FLOAT_PTR, float_ptr_type_node)
*************** DEF_FUNCTION_TYPE_1 (BT_FN_DFLOAT32_DFLO
*** 223,228 ****
--- 231,237 ----
  DEF_FUNCTION_TYPE_1 (BT_FN_DFLOAT64_DFLOAT64, BT_DFLOAT64, BT_DFLOAT64)
  DEF_FUNCTION_TYPE_1 (BT_FN_DFLOAT128_DFLOAT128, BT_DFLOAT128, BT_DFLOAT128)
  DEF_FUNCTION_TYPE_1 (BT_FN_VOID_VPTR, BT_VOID, BT_VOLATILE_PTR)
+ DEF_FUNCTION_TYPE_1 (BT_FN_VOID_APTR, BT_VOID, BT_ATOMIC_PTR)
  DEF_FUNCTION_TYPE_1 (BT_FN_VOID_PTRPTR, BT_VOID, BT_PTR_PTR)
  DEF_FUNCTION_TYPE_1 (BT_FN_UINT_UINT, BT_UINT, BT_UINT)
  DEF_FUNCTION_TYPE_1 (BT_FN_ULONG_ULONG, BT_ULONG, BT_ULONG)
*************** DEF_FUNCTION_TYPE_2 (BT_FN_I8_CONST_VPTR
*** 337,346 ****
  		     BT_INT)
  DEF_FUNCTION_TYPE_2 (BT_FN_I16_CONST_VPTR_INT, BT_I16, BT_CONST_VOLATILE_PTR,
  		     BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_VOID_VPTR_INT, BT_VOID, BT_VOLATILE_PTR, BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_VPTR_INT, BT_BOOL, BT_VOLATILE_PTR, BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_SIZE_CONST_VPTR, BT_BOOL, BT_SIZE,
! 		     BT_CONST_VOLATILE_PTR)
  
  DEF_POINTER_TYPE (BT_PTR_FN_VOID_PTR_PTR, BT_FN_VOID_PTR_PTR)
  
--- 346,365 ----
  		     BT_INT)
  DEF_FUNCTION_TYPE_2 (BT_FN_I16_CONST_VPTR_INT, BT_I16, BT_CONST_VOLATILE_PTR,
  		     BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I1_CONST_APTR_INT, BT_I1, BT_CONST_ATOMIC_PTR,
! 		     BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I2_CONST_APTR_INT, BT_I2, BT_CONST_ATOMIC_PTR,
! 		     BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I4_CONST_APTR_INT, BT_I4, BT_CONST_ATOMIC_PTR,
! 		     BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I8_CONST_APTR_INT, BT_I8, BT_CONST_ATOMIC_PTR,
! 		     BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_I16_CONST_APTR_INT, BT_I16, BT_CONST_ATOMIC_PTR,
! 		     BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_VOID_APTR_INT, BT_VOID, BT_ATOMIC_PTR, BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_APTR_INT, BT_BOOL, BT_ATOMIC_PTR, BT_INT)
! DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_SIZE_CONST_APTR, BT_BOOL, BT_SIZE,
! 		     BT_CONST_ATOMIC_PTR)
  
  DEF_POINTER_TYPE (BT_PTR_FN_VOID_PTR_PTR, BT_FN_VOID_PTR_PTR)
  
*************** DEF_FUNCTION_TYPE_3 (BT_FN_I2_VPTR_I2_IN
*** 420,430 ****
--- 439,460 ----
  DEF_FUNCTION_TYPE_3 (BT_FN_I4_VPTR_I4_INT, BT_I4, BT_VOLATILE_PTR, BT_I4, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_I8_VPTR_I8_INT, BT_I8, BT_VOLATILE_PTR, BT_I8, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_I16_VPTR_I16_INT, BT_I16, BT_VOLATILE_PTR, BT_I16, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I1_APTR_I1_INT, BT_I1, BT_ATOMIC_PTR, BT_I1, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I2_APTR_I2_INT, BT_I2, BT_ATOMIC_PTR, BT_I2, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I4_APTR_I4_INT, BT_I4, BT_ATOMIC_PTR, BT_I4, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I8_APTR_I8_INT, BT_I8, BT_ATOMIC_PTR, BT_I8, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_I16_APTR_I16_INT, BT_I16, BT_ATOMIC_PTR, BT_I16, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I1_INT, BT_VOID, BT_VOLATILE_PTR, BT_I1, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I2_INT, BT_VOID, BT_VOLATILE_PTR, BT_I2, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I4_INT, BT_VOID, BT_VOLATILE_PTR, BT_I4, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I8_INT, BT_VOID, BT_VOLATILE_PTR, BT_I8, BT_INT)
  DEF_FUNCTION_TYPE_3 (BT_FN_VOID_VPTR_I16_INT, BT_VOID, BT_VOLATILE_PTR, BT_I16, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I1_INT, BT_VOID, BT_ATOMIC_PTR, BT_I1, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I2_INT, BT_VOID, BT_ATOMIC_PTR, BT_I2, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I4_INT, BT_VOID, BT_ATOMIC_PTR, BT_I4, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I8_INT, BT_VOID, BT_ATOMIC_PTR, BT_I8, BT_INT)
+ DEF_FUNCTION_TYPE_3 (BT_FN_VOID_APTR_I16_INT, BT_VOID, BT_ATOMIC_PTR, BT_I16, BT_INT)
+ 
  
  DEF_FUNCTION_TYPE_4 (BT_FN_SIZE_CONST_PTR_SIZE_SIZE_FILEPTR,
  		     BT_SIZE, BT_CONST_PTR, BT_SIZE, BT_SIZE, BT_FILEPTR)
*************** DEF_FUNCTION_TYPE_4 (BT_FN_VOID_OMPFN_PT
*** 444,453 ****
  		     BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT, BT_UINT)
  DEF_FUNCTION_TYPE_4 (BT_FN_VOID_PTR_WORD_WORD_PTR,
  		     BT_VOID, BT_PTR, BT_WORD, BT_WORD, BT_PTR)
! DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_VPTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_VOLATILE_PTR, BT_PTR, BT_INT)
! DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_CONST_VPTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_CONST_VOLATILE_PTR, BT_PTR, BT_INT)
  
  DEF_FUNCTION_TYPE_5 (BT_FN_INT_STRING_INT_SIZE_CONST_STRING_VALIST_ARG,
  		     BT_INT, BT_STRING, BT_INT, BT_SIZE, BT_CONST_STRING,
--- 474,483 ----
  		     BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT, BT_UINT)
  DEF_FUNCTION_TYPE_4 (BT_FN_VOID_PTR_WORD_WORD_PTR,
  		     BT_VOID, BT_PTR, BT_WORD, BT_WORD, BT_PTR)
! DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_APTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_ATOMIC_PTR, BT_PTR, BT_INT)
! DEF_FUNCTION_TYPE_4 (BT_FN_VOID_SIZE_CONST_APTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_CONST_ATOMIC_PTR, BT_PTR, BT_INT)
  
  DEF_FUNCTION_TYPE_5 (BT_FN_INT_STRING_INT_SIZE_CONST_STRING_VALIST_ARG,
  		     BT_INT, BT_STRING, BT_INT, BT_SIZE, BT_CONST_STRING,
*************** DEF_FUNCTION_TYPE_5 (BT_FN_INT_STRING_IN
*** 455,462 ****
  DEF_FUNCTION_TYPE_5 (BT_FN_BOOL_LONG_LONG_LONG_LONGPTR_LONGPTR,
  		     BT_BOOL, BT_LONG, BT_LONG, BT_LONG,
  		     BT_PTR_LONG, BT_PTR_LONG)
! DEF_FUNCTION_TYPE_5 (BT_FN_VOID_SIZE_VPTR_PTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_VOLATILE_PTR, BT_PTR, BT_PTR, BT_INT)
  DEF_FUNCTION_TYPE_5 (BT_FN_BOOL_VPTR_PTR_I1_INT_INT,
  		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I1, BT_INT, BT_INT)
  DEF_FUNCTION_TYPE_5 (BT_FN_BOOL_VPTR_PTR_I2_INT_INT,
--- 485,492 ----
  DEF_FUNCTION_TYPE_5 (BT_FN_BOOL_LONG_LONG_LONG_LONGPTR_LONGPTR,
  		     BT_BOOL, BT_LONG, BT_LONG, BT_LONG,
  		     BT_PTR_LONG, BT_PTR_LONG)
! DEF_FUNCTION_TYPE_5 (BT_FN_VOID_SIZE_APTR_PTR_PTR_INT, BT_VOID, BT_SIZE,
! 		     BT_ATOMIC_PTR, BT_PTR, BT_PTR, BT_INT)
  DEF_FUNCTION_TYPE_5 (BT_FN_BOOL_VPTR_PTR_I1_INT_INT,
  		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I1, BT_INT, BT_INT)
  DEF_FUNCTION_TYPE_5 (BT_FN_BOOL_VPTR_PTR_I2_INT_INT,
*************** DEF_FUNCTION_TYPE_6 (BT_FN_VOID_OMPFN_PT
*** 480,502 ****
  DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULLPTR_ULLPTR,
  		     BT_BOOL, BT_BOOL, BT_ULONGLONG, BT_ULONGLONG,
  		     BT_ULONGLONG, BT_PTR_ULONGLONG, BT_PTR_ULONGLONG)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I1_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I1, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I2_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I2, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I4_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I4, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I8_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I8, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_VPTR_PTR_I16_BOOL_INT_INT, 
! 		     BT_BOOL, BT_VOLATILE_PTR, BT_PTR, BT_I16, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_SIZE_VPTR_PTR_PTR_INT_INT, BT_BOOL, BT_SIZE,
! 		     BT_VOLATILE_PTR, BT_PTR, BT_PTR, BT_INT, BT_INT)
  
  
  DEF_FUNCTION_TYPE_7 (BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG,
--- 510,532 ----
  DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULLPTR_ULLPTR,
  		     BT_BOOL, BT_BOOL, BT_ULONGLONG, BT_ULONGLONG,
  		     BT_ULONGLONG, BT_PTR_ULONGLONG, BT_PTR_ULONGLONG)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I1_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I1, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I2_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I2, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I4_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I4, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I8_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I8, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_APTR_PTR_I16_BOOL_INT_INT, 
! 		     BT_BOOL, BT_ATOMIC_PTR, BT_PTR, BT_I16, BT_BOOL, BT_INT,
  		     BT_INT)
! DEF_FUNCTION_TYPE_6 (BT_FN_BOOL_SIZE_APTR_PTR_PTR_INT_INT, BT_BOOL, BT_SIZE,
! 		     BT_ATOMIC_PTR, BT_PTR, BT_PTR, BT_INT, BT_INT)
  
  
  DEF_FUNCTION_TYPE_7 (BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG,
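
As an aside for anyone not fluent in the builtin-types.def shorthand: the
hunks above only retype the object-pointer argument of the __atomic
signatures from the volatile pointer types to the new atomic pointer types.
A minimal user-level illustration of one of the builtins behind these
signatures, compiled as C11, is below; the wrapper function is mine, not
part of the patch:

_Bool
cas_int (_Atomic int *p, int *expected, int desired)
{
  /* Strong compare-and-swap; on failure, *expected is updated with the
     value currently stored in *p.  */
  return __atomic_compare_exchange_n (p, expected, desired, 0 /* weak */,
                                      __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}
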
Index: gcc/sync-builtins.def
===================================================================
*** gcc/sync-builtins.def	(revision 201248)
--- gcc/sync-builtins.def	(working copy)
*************** DEF_SYNC_BUILTIN (BUILT_IN_SYNC_SYNCHRON
*** 260,567 ****
  /* __sync* builtins for the C++ memory model.  */
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_TEST_AND_SET, "__atomic_test_and_set",
! 		  BT_FN_BOOL_VPTR_INT, ATTR_NOTHROW_LEAF_LIST)
  
! DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_CLEAR, "__atomic_clear", BT_FN_VOID_VPTR_INT,
  		  ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE,
  		  "__atomic_exchange",
! 		  BT_FN_VOID_SIZE_VPTR_PTR_PTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_N,
  		  "__atomic_exchange_n",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_1,
  		  "__atomic_exchange_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_2,
  		  "__atomic_exchange_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_4,
  		  "__atomic_exchange_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_8,
  		  "__atomic_exchange_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_16,
  		  "__atomic_exchange_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD,
  		  "__atomic_load",
! 		  BT_FN_VOID_SIZE_CONST_VPTR_PTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_N,
  		  "__atomic_load_n",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_1,
  		  "__atomic_load_1",
! 		  BT_FN_I1_CONST_VPTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_2,
  		  "__atomic_load_2",
! 		  BT_FN_I2_CONST_VPTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_4,
  		  "__atomic_load_4",
! 		  BT_FN_I4_CONST_VPTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_8,
  		  "__atomic_load_8",
! 		  BT_FN_I8_CONST_VPTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_16,
  		  "__atomic_load_16",
! 		  BT_FN_I16_CONST_VPTR_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE,
  		  "__atomic_compare_exchange",
! 		  BT_FN_BOOL_SIZE_VPTR_PTR_PTR_INT_INT,
  		  ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_N,
  		  "__atomic_compare_exchange_n",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_1,
  		  "__atomic_compare_exchange_1",
! 		  BT_FN_BOOL_VPTR_PTR_I1_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_2,
  		  "__atomic_compare_exchange_2",
! 		  BT_FN_BOOL_VPTR_PTR_I2_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_4,
  		  "__atomic_compare_exchange_4",
! 		  BT_FN_BOOL_VPTR_PTR_I4_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_8,
  		  "__atomic_compare_exchange_8",
! 		  BT_FN_BOOL_VPTR_PTR_I8_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_16,
  		  "__atomic_compare_exchange_16",
! 		  BT_FN_BOOL_VPTR_PTR_I16_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE,
  		  "__atomic_store",
! 		  BT_FN_VOID_SIZE_VPTR_PTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_N,
  		  "__atomic_store_n",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_1,
  		  "__atomic_store_1",
! 		  BT_FN_VOID_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_2,
  		  "__atomic_store_2",
! 		  BT_FN_VOID_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_4,
  		  "__atomic_store_4",
! 		  BT_FN_VOID_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_8,
  		  "__atomic_store_8",
! 		  BT_FN_VOID_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_16,
  		  "__atomic_store_16",
! 		  BT_FN_VOID_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_N,
  		  "__atomic_add_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_1,
  		  "__atomic_add_fetch_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_2,
  		  "__atomic_add_fetch_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_4,
  		  "__atomic_add_fetch_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_8,
  		  "__atomic_add_fetch_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_16,
  		  "__atomic_add_fetch_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_N,
  		  "__atomic_sub_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_1,
  		  "__atomic_sub_fetch_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_2,
  		  "__atomic_sub_fetch_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_4,
  		  "__atomic_sub_fetch_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_8,
  		  "__atomic_sub_fetch_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_16,
  		  "__atomic_sub_fetch_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_N,
  		  "__atomic_and_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_1,
  		  "__atomic_and_fetch_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_2,
  		  "__atomic_and_fetch_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_4,
  		  "__atomic_and_fetch_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_8,
  		  "__atomic_and_fetch_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_16,
  		  "__atomic_and_fetch_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_N,
  		  "__atomic_nand_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_1,
  		  "__atomic_nand_fetch_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_2,
  		  "__atomic_nand_fetch_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_4,
  		  "__atomic_nand_fetch_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_8,
  		  "__atomic_nand_fetch_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_16,
  		  "__atomic_nand_fetch_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_N,
  		  "__atomic_xor_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_1,
  		  "__atomic_xor_fetch_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_2,
  		  "__atomic_xor_fetch_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_4,
  		  "__atomic_xor_fetch_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_8,
  		  "__atomic_xor_fetch_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_16,
  		  "__atomic_xor_fetch_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_N,
  		  "__atomic_or_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_1,
  		  "__atomic_or_fetch_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_2,
  		  "__atomic_or_fetch_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_4,
  		  "__atomic_or_fetch_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_8,
  		  "__atomic_or_fetch_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_16,
  		  "__atomic_or_fetch_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_N,
  		  "__atomic_fetch_add",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_1,
  		  "__atomic_fetch_add_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_2,
  		  "__atomic_fetch_add_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_4,
  		  "__atomic_fetch_add_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_8,
  		  "__atomic_fetch_add_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_16,
  		  "__atomic_fetch_add_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_N,
  		  "__atomic_fetch_sub",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_1,
  		  "__atomic_fetch_sub_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_2,
  		  "__atomic_fetch_sub_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_4,
  		  "__atomic_fetch_sub_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_8,
  		  "__atomic_fetch_sub_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_16,
  		  "__atomic_fetch_sub_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_N,
  		  "__atomic_fetch_and",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_1,
  		  "__atomic_fetch_and_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_2,
  		  "__atomic_fetch_and_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_4,
  		  "__atomic_fetch_and_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_8,
  		  "__atomic_fetch_and_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_16,
  		  "__atomic_fetch_and_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_N,
  		  "__atomic_fetch_nand",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_1,
  		  "__atomic_fetch_nand_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_2,
  		  "__atomic_fetch_nand_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_4,
  		  "__atomic_fetch_nand_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_8,
  		  "__atomic_fetch_nand_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_16,
  		  "__atomic_fetch_nand_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_N,
  		  "__atomic_fetch_xor",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_1,
  		  "__atomic_fetch_xor_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_2,
  		  "__atomic_fetch_xor_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_4,
  		  "__atomic_fetch_xor_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_8,
  		  "__atomic_fetch_xor_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_16,
  		  "__atomic_fetch_xor_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_N,
--- 260,567 ----
  /* __sync* builtins for the C++ memory model.  */
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_TEST_AND_SET, "__atomic_test_and_set",
! 		  BT_FN_BOOL_APTR_INT, ATTR_NOTHROW_LEAF_LIST)
  
! DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_CLEAR, "__atomic_clear", BT_FN_VOID_APTR_INT,
  		  ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE,
  		  "__atomic_exchange",
! 		  BT_FN_VOID_SIZE_APTR_PTR_PTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_N,
  		  "__atomic_exchange_n",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_1,
  		  "__atomic_exchange_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_2,
  		  "__atomic_exchange_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_4,
  		  "__atomic_exchange_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_8,
  		  "__atomic_exchange_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_EXCHANGE_16,
  		  "__atomic_exchange_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD,
  		  "__atomic_load",
! 		  BT_FN_VOID_SIZE_CONST_APTR_PTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_N,
  		  "__atomic_load_n",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_1,
  		  "__atomic_load_1",
! 		  BT_FN_I1_CONST_APTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_2,
  		  "__atomic_load_2",
! 		  BT_FN_I2_CONST_APTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_4,
  		  "__atomic_load_4",
! 		  BT_FN_I4_CONST_APTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_8,
  		  "__atomic_load_8",
! 		  BT_FN_I8_CONST_APTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_LOAD_16,
  		  "__atomic_load_16",
! 		  BT_FN_I16_CONST_APTR_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE,
  		  "__atomic_compare_exchange",
! 		  BT_FN_BOOL_SIZE_APTR_PTR_PTR_INT_INT,
  		  ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_N,
  		  "__atomic_compare_exchange_n",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_1,
  		  "__atomic_compare_exchange_1",
! 		  BT_FN_BOOL_APTR_PTR_I1_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_2,
  		  "__atomic_compare_exchange_2",
! 		  BT_FN_BOOL_APTR_PTR_I2_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_4,
  		  "__atomic_compare_exchange_4",
! 		  BT_FN_BOOL_APTR_PTR_I4_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_8,
  		  "__atomic_compare_exchange_8",
! 		  BT_FN_BOOL_APTR_PTR_I8_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_COMPARE_EXCHANGE_16,
  		  "__atomic_compare_exchange_16",
! 		  BT_FN_BOOL_APTR_PTR_I16_BOOL_INT_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE,
  		  "__atomic_store",
! 		  BT_FN_VOID_SIZE_APTR_PTR_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_N,
  		  "__atomic_store_n",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_1,
  		  "__atomic_store_1",
! 		  BT_FN_VOID_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_2,
  		  "__atomic_store_2",
! 		  BT_FN_VOID_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_4,
  		  "__atomic_store_4",
! 		  BT_FN_VOID_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_8,
  		  "__atomic_store_8",
! 		  BT_FN_VOID_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_STORE_16,
  		  "__atomic_store_16",
! 		  BT_FN_VOID_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_N,
  		  "__atomic_add_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_1,
  		  "__atomic_add_fetch_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_2,
  		  "__atomic_add_fetch_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_4,
  		  "__atomic_add_fetch_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_8,
  		  "__atomic_add_fetch_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ADD_FETCH_16,
  		  "__atomic_add_fetch_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_N,
  		  "__atomic_sub_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_1,
  		  "__atomic_sub_fetch_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_2,
  		  "__atomic_sub_fetch_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_4,
  		  "__atomic_sub_fetch_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_8,
  		  "__atomic_sub_fetch_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_SUB_FETCH_16,
  		  "__atomic_sub_fetch_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_N,
  		  "__atomic_and_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_1,
  		  "__atomic_and_fetch_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_2,
  		  "__atomic_and_fetch_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_4,
  		  "__atomic_and_fetch_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_8,
  		  "__atomic_and_fetch_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_AND_FETCH_16,
  		  "__atomic_and_fetch_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_N,
  		  "__atomic_nand_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_1,
  		  "__atomic_nand_fetch_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_2,
  		  "__atomic_nand_fetch_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_4,
  		  "__atomic_nand_fetch_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_8,
  		  "__atomic_nand_fetch_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_NAND_FETCH_16,
  		  "__atomic_nand_fetch_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_N,
  		  "__atomic_xor_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_1,
  		  "__atomic_xor_fetch_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_2,
  		  "__atomic_xor_fetch_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_4,
  		  "__atomic_xor_fetch_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_8,
  		  "__atomic_xor_fetch_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_XOR_FETCH_16,
  		  "__atomic_xor_fetch_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_N,
  		  "__atomic_or_fetch",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_1,
  		  "__atomic_or_fetch_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_2,
  		  "__atomic_or_fetch_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_4,
  		  "__atomic_or_fetch_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_8,
  		  "__atomic_or_fetch_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_OR_FETCH_16,
  		  "__atomic_or_fetch_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_N,
  		  "__atomic_fetch_add",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_1,
  		  "__atomic_fetch_add_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_2,
  		  "__atomic_fetch_add_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_4,
  		  "__atomic_fetch_add_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_8,
  		  "__atomic_fetch_add_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_ADD_16,
  		  "__atomic_fetch_add_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_N,
  		  "__atomic_fetch_sub",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_1,
  		  "__atomic_fetch_sub_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_2,
  		  "__atomic_fetch_sub_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_4,
  		  "__atomic_fetch_sub_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_8,
  		  "__atomic_fetch_sub_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_SUB_16,
  		  "__atomic_fetch_sub_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_N,
  		  "__atomic_fetch_and",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_1,
  		  "__atomic_fetch_and_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_2,
  		  "__atomic_fetch_and_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_4,
  		  "__atomic_fetch_and_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_8,
  		  "__atomic_fetch_and_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_AND_16,
  		  "__atomic_fetch_and_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_N,
  		  "__atomic_fetch_nand",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_1,
  		  "__atomic_fetch_nand_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_2,
  		  "__atomic_fetch_nand_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_4,
  		  "__atomic_fetch_nand_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_8,
  		  "__atomic_fetch_nand_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_NAND_16,
  		  "__atomic_fetch_nand_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_N,
  		  "__atomic_fetch_xor",
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_1,
  		  "__atomic_fetch_xor_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_2,
  		  "__atomic_fetch_xor_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_4,
  		  "__atomic_fetch_xor_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_8,
  		  "__atomic_fetch_xor_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_XOR_16,
  		  "__atomic_fetch_xor_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_N,
*************** DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_
*** 569,595 ****
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_1,
  		  "__atomic_fetch_or_1",
! 		  BT_FN_I1_VPTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_2,
  		  "__atomic_fetch_or_2",
! 		  BT_FN_I2_VPTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_4,
  		  "__atomic_fetch_or_4",
! 		  BT_FN_I4_VPTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_8,
  		  "__atomic_fetch_or_8",
! 		  BT_FN_I8_VPTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_16,
  		  "__atomic_fetch_or_16",
! 		  BT_FN_I16_VPTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ALWAYS_LOCK_FREE,
  		  "__atomic_always_lock_free",
! 		  BT_FN_BOOL_SIZE_CONST_VPTR, ATTR_CONST_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_IS_LOCK_FREE,
  		  "__atomic_is_lock_free",
! 		  BT_FN_BOOL_SIZE_CONST_VPTR, ATTR_CONST_NOTHROW_LEAF_LIST)
  
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_THREAD_FENCE,
--- 569,595 ----
  		  BT_FN_VOID_VAR, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_1,
  		  "__atomic_fetch_or_1",
! 		  BT_FN_I1_APTR_I1_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_2,
  		  "__atomic_fetch_or_2",
! 		  BT_FN_I2_APTR_I2_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_4,
  		  "__atomic_fetch_or_4",
! 		  BT_FN_I4_APTR_I4_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_8,
  		  "__atomic_fetch_or_8",
! 		  BT_FN_I8_APTR_I8_INT, ATTR_NOTHROW_LEAF_LIST)
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_FETCH_OR_16,
  		  "__atomic_fetch_or_16",
! 		  BT_FN_I16_APTR_I16_INT, ATTR_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_ALWAYS_LOCK_FREE,
  		  "__atomic_always_lock_free",
! 		  BT_FN_BOOL_SIZE_CONST_APTR, ATTR_CONST_NOTHROW_LEAF_LIST)
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_IS_LOCK_FREE,
  		  "__atomic_is_lock_free",
! 		  BT_FN_BOOL_SIZE_CONST_APTR, ATTR_CONST_NOTHROW_LEAF_LIST)
  
  
  DEF_SYNC_BUILTIN (BUILT_IN_ATOMIC_THREAD_FENCE,
Index: gcc/doc/generic.texi
===================================================================
*** gcc/doc/generic.texi	(revision 201248)
--- gcc/doc/generic.texi	(working copy)
*************** This macro holds if the type is @code{co
*** 2547,2552 ****
--- 2547,2555 ----
  @item CP_TYPE_VOLATILE_P
  This macro holds if the type is @code{volatile}-qualified.
  
+ @item CP_TYPE_ATOMIC_P
+ This macro holds if the type is @code{atomic}-qualified.
+ 
  @item CP_TYPE_RESTRICT_P
  This macro holds if the type is @code{restrict}-qualified.
  
Index: gcc/doc/tm.texi
===================================================================
*** gcc/doc/tm.texi	(revision 201248)
--- gcc/doc/tm.texi	(working copy)
*************** It returns true if the target supports G
*** 11375,11377 ****
--- 11375,11381 ----
  The support includes the assembler, linker and dynamic linker.
  The default value of this hook is based on target's libc.
  @end deftypefn
+ 
+ @deftypefn {Target Hook} {unsigned int} TARGET_ATOMIC_ALIGN_FOR_MODE (enum machine_mode @var{mode})
+ If defined, this hook returns the alignment in bits to be used for an atomic object of machine mode @var{mode}.  If it returns 0, the default alignment for @var{mode} is used.
+ @end deftypefn
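
To make the new hook concrete, a port would register it through the usual
targetm macro-override mechanism, along these lines (a sketch of mine, not
code from the patch; the function name and the 64-bit figure are purely
illustrative):

static unsigned int
example_atomic_align_for_mode (enum machine_mode mode)
{
  /* Give 8-byte atomic objects a full 64-bit alignment.  Returning 0
     means "use the default alignment for this mode".  */
  if (GET_MODE_SIZE (mode) == 8)
    return 64;
  return 0;
}

#undef TARGET_ATOMIC_ALIGN_FOR_MODE
#define TARGET_ATOMIC_ALIGN_FOR_MODE example_atomic_align_for_mode
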
Index: gcc/doc/tm.texi.in
===================================================================
*** gcc/doc/tm.texi.in	(revision 201248)
--- gcc/doc/tm.texi.in	(working copy)
*************** and the associated definitions of those
*** 8415,8417 ****
--- 8415,8419 ----
  @hook TARGET_ATOMIC_TEST_AND_SET_TRUEVAL
  
  @hook TARGET_HAS_IFUNC_P
+ 
+ @hook TARGET_ATOMIC_ALIGN_FOR_MODE
Index: gcc/testsuite/gcc.dg/atomic-exchange-1.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-exchange-1.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-exchange-1.c	(working copy)
***************
*** 7,13 ****
  
  extern void abort(void);
  
! char v, count, ret;
  
  main ()
  {
--- 7,14 ----
  
  extern void abort(void);
  
! char __attribute__ ((atomic)) v;
! char count, ret;
  
  main ()
  {
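
The bodies of these exchange tests are untouched; they exercise
__atomic_exchange_n on v with each memory model, roughly in this shape
(a standalone sketch, not text from the patch; the function name is made
up):

char __attribute__ ((atomic)) v;
char count, ret;

void
do_exchange (void)
{
  ret = __atomic_exchange_n (&v, count, __ATOMIC_SEQ_CST);
}
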
Index: gcc/testsuite/gcc.dg/atomic-exchange-2.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-exchange-2.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-exchange-2.c	(working copy)
***************
*** 7,13 ****
  
  extern void abort(void);
  
! short v, count, ret;
  
  main ()
  {
--- 7,14 ----
  
  extern void abort(void);
  
! short __attribute__ ((atomic)) v;
! short count, ret;
  
  main ()
  {
Index: gcc/testsuite/gcc.dg/atomic-exchange-3.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-exchange-3.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-exchange-3.c	(working copy)
***************
*** 7,13 ****
  
  extern void abort(void);
  
! int v, count, ret;
  
  main ()
  {
--- 7,14 ----
  
  extern void abort(void);
  
! int __attribute__ ((atomic)) v;
! int count, ret;
  
  main ()
  {
Index: gcc/testsuite/gcc.dg/atomic-exchange-4.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-exchange-4.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-exchange-4.c	(working copy)
***************
*** 9,15 ****
  
  extern void abort(void);
  
! long long v, count, ret;
  
  main ()
  {
--- 9,16 ----
  
  extern void abort(void);
  
! long long __attribute__ ((atomic)) v;
! long long count, ret;
  
  main ()
  {
Index: gcc/testsuite/gcc.dg/atomic-exchange-5.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-exchange-5.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-exchange-5.c	(working copy)
***************
*** 8,14 ****
  
  extern void abort(void);
  
! __int128_t v, count, ret;
  
  main ()
  {
--- 8,15 ----
  
  extern void abort(void);
  
! __int128_t __attribute__ ((atomic)) v;
! __int128_t count, ret;
  
  main ()
  {
Index: gcc/testsuite/gcc.dg/atomic-op-1.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-op-1.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-op-1.c	(working copy)
***************
*** 1,13 ****
  /* Test __atomic routines for existence and proper execution on 1 byte 
     values with each valid memory model.  */
  /* { dg-do run } */
  /* { dg-require-effective-target sync_char_short } */
  
  /* Test the execution of the __atomic_*OP builtin routines for a char.  */
  
  extern void abort(void);
  
! char v, count, res;
  const char init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
--- 1,15 ----
  /* Test __atomic routines for existence and proper execution on 1 byte 
     values with each valid memory model.  */
  /* { dg-do run } */
+ /* { dg-options "--std=c11" } */
  /* { dg-require-effective-target sync_char_short } */
  
  /* Test the execution of the __atomic_*OP builtin routines for a char.  */
  
  extern void abort(void);
  
! _Atomic char v;
! char count, res;
  const char init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
*************** test_or ()
*** 527,532 ****
--- 529,535 ----
      abort ();
  }
  
+ int
  main ()
  {
    test_fetch_add ();
Index: gcc/testsuite/gcc.dg/atomic-op-2.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-op-2.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-op-2.c	(working copy)
***************
*** 1,6 ****
--- 1,7 ----
  /* Test __atomic routines for existence and proper execution on 2 byte 
     values with each valid memory model.  */
  /* { dg-do run } */
+ /* { dg-options "--std=c11" } */
  /* { dg-require-effective-target sync_char_short } */
  
  
***************
*** 8,14 ****
  
  extern void abort(void);
  
! short v, count, res;
  const short init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
--- 9,16 ----
  
  extern void abort(void);
  
! _Atomic short v;
! short count, res;
  const short init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
*************** test_or ()
*** 528,533 ****
--- 530,536 ----
      abort ();
  }
  
+ int
  main ()
  {
    test_fetch_add ();
Index: gcc/testsuite/gcc.dg/atomic-op-3.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-op-3.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-op-3.c	(working copy)
***************
*** 1,13 ****
  /* Test __atomic routines for existence and proper execution on 4 byte 
     values with each valid memory model.  */
  /* { dg-do run } */
  /* { dg-require-effective-target sync_int_long } */
  
  /* Test the execution of the __atomic_*OP builtin routines for an int.  */
  
  extern void abort(void);
  
! int v, count, res;
  const int init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
--- 1,15 ----
  /* Test __atomic routines for existence and proper execution on 4 byte 
     values with each valid memory model.  */
  /* { dg-do run } */
+ /* { dg-options "--std=c11" } */
  /* { dg-require-effective-target sync_int_long } */
  
  /* Test the execution of the __atomic_*OP builtin routines for an int.  */
  
  extern void abort(void);
  
! _Atomic int v;
! int count, res;
  const int init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
*************** test_or ()
*** 527,532 ****
--- 529,535 ----
      abort ();
  }
  
+ int
  main ()
  {
    test_fetch_add ();
Index: gcc/testsuite/gcc.dg/atomic-op-4.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-op-4.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-op-4.c	(working copy)
***************
*** 2,15 ****
     values with each valid memory model.  */
  /* { dg-do run } */
  /* { dg-require-effective-target sync_long_long_runtime } */
! /* { dg-options "" } */
! /* { dg-options "-march=pentium" { target { { i?86-*-* x86_64-*-* } && ia32 } } } */
  
  /* Test the execution of the __atomic_*OP builtin routines for long long.  */
  
  extern void abort(void);
  
! long long v, count, res;
  const long long init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
--- 2,16 ----
     values with each valid memory model.  */
  /* { dg-do run } */
  /* { dg-require-effective-target sync_long_long_runtime } */
! /* { dg-options "--std=c11" } */
! /* { dg-options "--std=c11 -march=pentium" { target { { i?86-*-* x86_64-*-* } && ia32 } } } */
  
  /* Test the execution of the __atomic_*OP builtin routines for long long.  */
  
  extern void abort(void);
  
! _Atomic long long v;
! long long count, res;
  const long long init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
*************** test_or ()
*** 529,534 ****
--- 530,536 ----
      abort ();
  }
  
+ int
  main ()
  {
    test_fetch_add ();
Index: gcc/testsuite/gcc.dg/atomic-op-5.c
===================================================================
*** gcc/testsuite/gcc.dg/atomic-op-5.c	(revision 201248)
--- gcc/testsuite/gcc.dg/atomic-op-5.c	(working copy)
***************
*** 1,14 ****
  /* Test __atomic routines for existence and proper execution on 16 byte 
     values with each valid memory model.  */
  /* { dg-do run } */
  /* { dg-require-effective-target sync_int_128_runtime } */
! /* { dg-options "-mcx16" { target { i?86-*-* x86_64-*-* } } } */
  
  /* Test the execution of the __atomic_*OP builtin routines for an int_128.  */
  
  extern void abort(void);
  
! __int128_t v, count, res;
  const __int128_t init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
--- 1,16 ----
  /* Test __atomic routines for existence and proper execution on 16 byte 
     values with each valid memory model.  */
  /* { dg-do run } */
+ /* { dg-options "--std=c11" } */
  /* { dg-require-effective-target sync_int_128_runtime } */
! /* { dg-options "--std=c11 -mcx16" { target { i?86-*-* x86_64-*-* } } } */
  
  /* Test the execution of the __atomic_*OP builtin routines for an int_128.  */
  
  extern void abort(void);
  
! _Atomic __int128_t v;
! __int128_t count, res;
  const __int128_t init = ~0;
  
  /* The fetch_op routines return the original value before the operation.  */
*************** test_or ()
*** 528,533 ****
--- 530,536 ----
      abort ();
  }
  
+ int
  main ()
  {
    test_fetch_add ();