```
namespace std {
  template<class T> struct atomic_ref {
  private:
    T* ptr;             // *exposition only*

  public:
    using value_type = T;
    static constexpr size_t required_alignment = *implementation-defined*;
    static constexpr bool is_always_lock_free = *implementation-defined*;
    bool is_lock_free() const noexcept;

    explicit atomic_ref(T&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(T, memory_order = memory_order::seq_cst) const noexcept;
    T operator=(T) const noexcept;
    T load(memory_order = memory_order::seq_cst) const noexcept;
    operator T() const noexcept;

    T exchange(T, memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(T&, T,
                               memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(T&, T,
                                 memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(T&, T,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(T&, T,
                                 memory_order = memory_order::seq_cst) const noexcept;

    void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
```

An atomic_ref object applies atomic operations ([atomics.general]) to
the object referenced by *ptr such that,
for the lifetime ([basic.life]) of the atomic_ref object,
the object referenced by *ptr is an atomic object ([intro.races]).

The lifetime ([basic.life]) of an object referenced by *ptr
shall exceed the lifetime of all atomic_refs that reference the object.

While any atomic_ref instances exist
that reference the *ptr object,
all accesses to that object shall exclusively occur
through those atomic_ref instances.

No subobject of the object referenced by atomic_ref
shall be concurrently referenced by any other atomic_ref object.

Atomic operations applied to an object
through a referencing atomic_ref are atomic with respect to
atomic operations applied through any other atomic_ref
referencing the same object.

```
static constexpr size_t required_alignment;
```

[*Note 1*: Hardware could require an object referenced by an atomic_ref to have stricter alignment ([basic.align]) than other objects of type T. Further, whether operations on an atomic_ref are lock-free could depend on the alignment of the referenced object. For example, lock-free operations on std::complex&lt;double&gt; could be supported only if aligned to 2*alignof(double). *end note*]

```
static constexpr bool is_always_lock_free;
```

```
bool is_lock_free() const noexcept;
```

```
atomic_ref(T& obj);
```

```
atomic_ref(const atomic_ref& ref) noexcept;
```

```
void store(T desired, memory_order order = memory_order::seq_cst) const noexcept;
```

```
T operator=(T desired) const noexcept;
```

```
T load(memory_order order = memory_order::seq_cst) const noexcept;
```

```
operator T() const noexcept;
```

```
T exchange(T desired, memory_order order = memory_order::seq_cst) const noexcept;
```

Memory is affected according to the value of order.

This operation is an atomic read-modify-write operation ([intro.multithread]).

```
bool compare_exchange_weak(T& expected, T desired,
memory_order success, memory_order failure) const noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order success, memory_order failure) const noexcept;
bool compare_exchange_weak(T& expected, T desired,
memory_order order = memory_order::seq_cst) const noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order order = memory_order::seq_cst) const noexcept;
```

These operations retrieve the value in expected. They then atomically compare the value representation of the value referenced by *ptr for equality with that previously retrieved from expected, and if true, replace the value referenced by *ptr with that in desired.

If and only if the comparison is true,
memory is affected according to the value of success, and
if the comparison is false,
memory is affected according to the value of failure.

When only one memory_order argument is supplied,
the value of success is order, and
the value of failure is order
except that a value of memory_order::acq_rel shall be replaced by
the value memory_order::acquire and
a value of memory_order::release shall be replaced by
the value memory_order::relaxed.

If and only if the comparison is false then,
after the atomic operation,
the value in expected is replaced by
the value read from the value referenced by *ptr
during the atomic comparison.

If the operation returns true,
these operations are atomic read-modify-write operations ([intro.races])
on the value referenced by *ptr.

Otherwise, these operations are atomic load operations on that memory.

That is, even when the contents of memory referred to
by expected and ptr are equal,
it may return false and
store back to expected the same memory contents
that were originally there.

[*Note 2*: This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. *end note*]

```
void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
```

```
void notify_one() const noexcept;
```

```
void notify_all() const noexcept;
```

There are specializations of the atomic_ref class template
for the integral types
char,
signed char,
unsigned char,
short,
unsigned short,
int,
unsigned int,
long,
unsigned long,
long long,
unsigned long long,
char8_t,
char16_t,
char32_t,
wchar_t,
and any other types needed by the typedefs in the header <cstdint>.

For each such type *integral-type*,
the specialization atomic_ref<*integral-type*> provides
additional atomic operations appropriate to integral types.

```
namespace std {
  template<> struct atomic_ref<*integral-type*> {
  private:
    *integral-type** ptr;        // *exposition only*

  public:
    using value_type = *integral-type*;
    using difference_type = value_type;
    static constexpr size_t required_alignment = *implementation-defined*;
    static constexpr bool is_always_lock_free = *implementation-defined*;
    bool is_lock_free() const noexcept;

    explicit atomic_ref(*integral-type*&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(*integral-type*, memory_order = memory_order::seq_cst) const noexcept;
    *integral-type* operator=(*integral-type*) const noexcept;
    *integral-type* load(memory_order = memory_order::seq_cst) const noexcept;
    operator *integral-type*() const noexcept;

    *integral-type* exchange(*integral-type*,
                             memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(*integral-type*&, *integral-type*,
                               memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(*integral-type*&, *integral-type*,
                                 memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(*integral-type*&, *integral-type*,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(*integral-type*&, *integral-type*,
                                 memory_order = memory_order::seq_cst) const noexcept;

    *integral-type* fetch_add(*integral-type*,
                              memory_order = memory_order::seq_cst) const noexcept;
    *integral-type* fetch_sub(*integral-type*,
                              memory_order = memory_order::seq_cst) const noexcept;
    *integral-type* fetch_and(*integral-type*,
                              memory_order = memory_order::seq_cst) const noexcept;
    *integral-type* fetch_or(*integral-type*,
                             memory_order = memory_order::seq_cst) const noexcept;
    *integral-type* fetch_xor(*integral-type*,
                              memory_order = memory_order::seq_cst) const noexcept;

    *integral-type* operator++(int) const noexcept;
    *integral-type* operator--(int) const noexcept;
    *integral-type* operator++() const noexcept;
    *integral-type* operator--() const noexcept;
    *integral-type* operator+=(*integral-type*) const noexcept;
    *integral-type* operator-=(*integral-type*) const noexcept;
    *integral-type* operator&=(*integral-type*) const noexcept;
    *integral-type* operator|=(*integral-type*) const noexcept;
    *integral-type* operator^=(*integral-type*) const noexcept;

    void wait(*integral-type*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
```

```
*integral-type* fetch_*key*(*integral-type* operand,
                            memory_order order = memory_order::seq_cst) const noexcept;
```

Memory is affected according to the value of order.

These operations are atomic read-modify-write operations ([intro.races]).

```
*integral-type* operator *op*=(*integral-type* operand) const noexcept;
```

There are specializations of the atomic_ref class template
for all cv-unqualified floating-point types.

For each such type *floating-point-type*,
the specialization atomic_ref&lt;*floating-point-type*&gt; provides
additional atomic operations appropriate to floating-point types.

```
namespace std {
  template<> struct atomic_ref<*floating-point-type*> {
  private:
    *floating-point-type** ptr;        // *exposition only*

  public:
    using value_type = *floating-point-type*;
    using difference_type = value_type;
    static constexpr size_t required_alignment = *implementation-defined*;
    static constexpr bool is_always_lock_free = *implementation-defined*;
    bool is_lock_free() const noexcept;

    explicit atomic_ref(*floating-point-type*&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(*floating-point-type*, memory_order = memory_order::seq_cst) const noexcept;
    *floating-point-type* operator=(*floating-point-type*) const noexcept;
    *floating-point-type* load(memory_order = memory_order::seq_cst) const noexcept;
    operator *floating-point-type*() const noexcept;

    *floating-point-type* exchange(*floating-point-type*,
                                   memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(*floating-point-type*&, *floating-point-type*,
                               memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(*floating-point-type*&, *floating-point-type*,
                                 memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(*floating-point-type*&, *floating-point-type*,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(*floating-point-type*&, *floating-point-type*,
                                 memory_order = memory_order::seq_cst) const noexcept;

    *floating-point-type* fetch_add(*floating-point-type*,
                                    memory_order = memory_order::seq_cst) const noexcept;
    *floating-point-type* fetch_sub(*floating-point-type*,
                                    memory_order = memory_order::seq_cst) const noexcept;
    *floating-point-type* operator+=(*floating-point-type*) const noexcept;
    *floating-point-type* operator-=(*floating-point-type*) const noexcept;

    void wait(*floating-point-type*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
```

```
*floating-point-type* fetch_*key*(*floating-point-type* operand,
                                  memory_order order = memory_order::seq_cst) const noexcept;
```

Memory is affected according to the value of order.

These operations are atomic read-modify-write operations ([intro.races]).

Atomic arithmetic operations on *floating-point-type* should conform to
the std::numeric_limits<*floating-point-type*> traits
associated with the floating-point type ([limits.syn]).

```
*floating-point-type* operator *op*=(*floating-point-type* operand) const noexcept;
```

```
namespace std {
  template<class T> struct atomic_ref<T*> {
  private:
    T** ptr;             // *exposition only*

  public:
    using value_type = T*;
    using difference_type = ptrdiff_t;
    static constexpr size_t required_alignment = *implementation-defined*;
    static constexpr bool is_always_lock_free = *implementation-defined*;
    bool is_lock_free() const noexcept;

    explicit atomic_ref(T*&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(T*, memory_order = memory_order::seq_cst) const noexcept;
    T* operator=(T*) const noexcept;
    T* load(memory_order = memory_order::seq_cst) const noexcept;
    operator T*() const noexcept;

    T* exchange(T*, memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(T*&, T*,
                               memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(T*&, T*,
                                 memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(T*&, T*,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(T*&, T*,
                                 memory_order = memory_order::seq_cst) const noexcept;

    T* fetch_add(difference_type, memory_order = memory_order::seq_cst) const noexcept;
    T* fetch_sub(difference_type, memory_order = memory_order::seq_cst) const noexcept;

    T* operator++(int) const noexcept;
    T* operator--(int) const noexcept;
    T* operator++() const noexcept;
    T* operator--() const noexcept;
    T* operator+=(difference_type) const noexcept;
    T* operator-=(difference_type) const noexcept;

    void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
```

```
T* fetch_*key*(difference_type operand,
               memory_order order = memory_order::seq_cst) const noexcept;
```

Memory is affected according to the value of order.

These operations are atomic read-modify-write operations ([intro.races]).

```
T* operator *op*=(difference_type operand) const noexcept;
```

```
value_type operator++(int) const noexcept;
```

```
value_type operator--(int) const noexcept;
```

```
value_type operator++() const noexcept;
```

```
value_type operator--() const noexcept;
```