```
namespace std {
  template<class T> struct atomic_ref {
  private:
    T* ptr;             // exposition only

  public:
    using value_type = T;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    explicit atomic_ref(T&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(T, memory_order = memory_order::seq_cst) const noexcept;
    T operator=(T) const noexcept;
    T load(memory_order = memory_order::seq_cst) const noexcept;
    operator T() const noexcept;
    T exchange(T, memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(T&, T, memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(T&, T, memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(T&, T,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(T&, T,
                                 memory_order = memory_order::seq_cst) const noexcept;

    void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
```

An atomic_ref object applies atomic operations ([atomics.general]) to
the object referenced by *ptr such that,
for the lifetime ([basic.life]) of the atomic_ref object,
the object referenced by *ptr is an atomic object ([intro.races]).

The lifetime ([basic.life]) of an object referenced by *ptr
shall exceed the lifetime of all atomic_refs that reference the object.

While any atomic_ref instances exist
that reference the object referenced by *ptr,
all accesses to that object shall exclusively occur
through those atomic_ref instances.

No subobject of the object referenced by atomic_ref
shall be concurrently referenced by any other atomic_ref object.

Atomic operations applied to an object
through a referencing atomic_ref are atomic with respect to
atomic operations applied through any other atomic_ref
referencing the same object.

```
static constexpr size_t required_alignment;
```

[Note: Hardware could require an object
referenced by an atomic_ref
to have stricter alignment ([basic.align])
than other objects of type T.
Further, whether operations on an atomic_ref
are lock-free could depend on the alignment of the referenced object.
For example, lock-free operations on std::complex<double>
could be supported only if aligned to 2*alignof(double). — *end note*]

```
static constexpr bool is_always_lock_free;
```

```
bool is_lock_free() const noexcept;
```

```
atomic_ref(T& obj);
```

```
atomic_ref(const atomic_ref& ref) noexcept;
```

```
void store(T desired, memory_order order = memory_order::seq_cst) const noexcept;
```

```
T operator=(T desired) const noexcept;
```

```
T load(memory_order order = memory_order::seq_cst) const noexcept;
```

```
operator T() const noexcept;
```

```
T exchange(T desired, memory_order order = memory_order::seq_cst) const noexcept;
```

Memory is affected according to the value of order.

This operation is an atomic read-modify-write operation ([intro.multithread]).

```
bool compare_exchange_weak(T& expected, T desired,
                           memory_order success, memory_order failure) const noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order success, memory_order failure) const noexcept;
bool compare_exchange_weak(T& expected, T desired,
                           memory_order order = memory_order::seq_cst) const noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order order = memory_order::seq_cst) const noexcept;
```

Effects: Retrieves the value in expected.
It then atomically compares the value representation of
the value referenced by *ptr for equality
with that previously retrieved from expected,
and if true, replaces the value referenced by *ptr
with that in desired.

If and only if the comparison is true,
memory is affected according to the value of success, and
if the comparison is false,
memory is affected according to the value of failure.

When only one memory_order argument is supplied,
the value of success is order, and
the value of failure is order
except that a value of memory_order::acq_rel shall be replaced by
the value memory_order::acquire and
a value of memory_order::release shall be replaced by
the value memory_order::relaxed.

If and only if the comparison is false then,
after the atomic operation,
the value in expected is replaced by
the value read from the value referenced by *ptr
during the atomic comparison.

If the operation returns true,
these operations are atomic read-modify-write operations ([intro.races])
on the value referenced by *ptr.

Otherwise, these operations are atomic load operations on that memory.

Remarks: A weak compare-and-exchange operation may fail spuriously.

That is, even when the contents of memory referred to
by expected and *ptr are equal,
it may return false and
store back to expected the same memory contents
that were originally there.

[Note: This spurious failure enables implementation of compare-and-exchange
on a broader class of machines, e.g., load-locked store-conditional machines.
A consequence of spurious failure is
that nearly all uses of weak compare-and-exchange will be in a loop.
When a compare-and-exchange is in a loop,
the weak version will yield better performance on some platforms.
When a weak compare-and-exchange would require a loop and
a strong one would not, the strong one is preferable. — *end note*]

```
void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
```

```
void notify_one() const noexcept;
```

Effects: Unblocks the execution of at least one atomic waiting operation on *ptr
that is eligible to be unblocked ([atomics.wait]) by this call,
if any such atomic waiting operations exist.

```
void notify_all() const noexcept;
```

Effects: Unblocks the execution of all atomic waiting operations on *ptr
that are eligible to be unblocked ([atomics.wait]) by this call.

There are specializations of the atomic_ref class template
for the integral types
char,
signed char,
unsigned char,
short,
unsigned short,
int,
unsigned int,
long,
unsigned long,
long long,
unsigned long long,
char8_t,
char16_t,
char32_t,
wchar_t,
and any other types needed by the typedefs in the header <cstdint> ([cstdint.syn]).

For each such type integral,
the specialization atomic_ref<integral> provides
additional atomic operations appropriate to integral types.

```
namespace std {
  template<> struct atomic_ref<integral> {
  private:
    integral* ptr;      // exposition only

  public:
    using value_type = integral;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    explicit atomic_ref(integral&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral operator=(integral) const noexcept;
    integral load(memory_order = memory_order::seq_cst) const noexcept;
    operator integral() const noexcept;
    integral exchange(integral,
                      memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(integral&, integral,
                               memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(integral&, integral,
                                 memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(integral&, integral,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(integral&, integral,
                                 memory_order = memory_order::seq_cst) const noexcept;

    integral fetch_add(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral fetch_sub(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral fetch_and(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral fetch_or(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral fetch_xor(integral, memory_order = memory_order::seq_cst) const noexcept;

    integral operator++(int) const noexcept;
    integral operator--(int) const noexcept;
    integral operator++() const noexcept;
    integral operator--() const noexcept;
    integral operator+=(integral) const noexcept;
    integral operator-=(integral) const noexcept;
    integral operator&=(integral) const noexcept;
    integral operator|=(integral) const noexcept;
    integral operator^=(integral) const noexcept;

    void wait(integral, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
```

```
integral fetch_key(integral operand, memory_order order = memory_order::seq_cst) const noexcept;
```

Effects: Atomically replaces the value referenced by *ptr with
the result of the computation applied to the value referenced by *ptr
and the given operand.

Memory is affected according to the value of order.

These operations are atomic read-modify-write operations ([intro.races]).

Remarks: For signed integer types,
the result is as if the object value and parameters
were converted to their corresponding unsigned types,
the computation performed on those types, and
the result converted back to the signed type.

```
integral operator op=(integral operand) const noexcept;
```

Effects: Equivalent to: return fetch_key(operand) op operand;

There are specializations of the atomic_ref class template
for the floating-point types
float,
double, and
long double.

For each such type floating-point,
the specialization atomic_ref<floating-point> provides
additional atomic operations appropriate to floating-point types.

```
namespace std {
  template<> struct atomic_ref<floating-point> {
  private:
    floating-point* ptr;        // exposition only

  public:
    using value_type = floating-point;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    explicit atomic_ref(floating-point&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(floating-point, memory_order = memory_order::seq_cst) const noexcept;
    floating-point operator=(floating-point) const noexcept;
    floating-point load(memory_order = memory_order::seq_cst) const noexcept;
    operator floating-point() const noexcept;
    floating-point exchange(floating-point,
                            memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order = memory_order::seq_cst) const noexcept;

    floating-point fetch_add(floating-point,
                             memory_order = memory_order::seq_cst) const noexcept;
    floating-point fetch_sub(floating-point,
                             memory_order = memory_order::seq_cst) const noexcept;

    floating-point operator+=(floating-point) const noexcept;
    floating-point operator-=(floating-point) const noexcept;

    void wait(floating-point, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
```

```
floating-point fetch_key(floating-point operand,
memory_order order = memory_order::seq_cst) const noexcept;
```

Effects: Atomically replaces the value referenced by *ptr with
the result of the computation applied to the value referenced by *ptr
and the given operand.

Memory is affected according to the value of order.

These operations are atomic read-modify-write operations ([intro.races]).

Remarks: If the result is not a representable value for its type ([expr.pre]),
the result is unspecified,
but the operations otherwise have no undefined behavior.

Atomic arithmetic operations on floating-point should conform to
the std::numeric_limits<floating-point> traits
associated with the floating-point type ([limits.syn]).

```
floating-point operator op=(floating-point operand) const noexcept;
```

Effects: Equivalent to: return fetch_key(operand) op operand;

```
namespace std {
  template<class T> struct atomic_ref<T*> {
  private:
    T** ptr;            // exposition only

  public:
    using value_type = T*;
    using difference_type = ptrdiff_t;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    explicit atomic_ref(T*&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(T*, memory_order = memory_order::seq_cst) const noexcept;
    T* operator=(T*) const noexcept;
    T* load(memory_order = memory_order::seq_cst) const noexcept;
    operator T*() const noexcept;
    T* exchange(T*, memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(T*&, T*,
                               memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(T*&, T*,
                                 memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(T*&, T*,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(T*&, T*,
                                 memory_order = memory_order::seq_cst) const noexcept;

    T* fetch_add(difference_type, memory_order = memory_order::seq_cst) const noexcept;
    T* fetch_sub(difference_type, memory_order = memory_order::seq_cst) const noexcept;

    T* operator++(int) const noexcept;
    T* operator--(int) const noexcept;
    T* operator++() const noexcept;
    T* operator--() const noexcept;
    T* operator+=(difference_type) const noexcept;
    T* operator-=(difference_type) const noexcept;

    void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
```

```
T* fetch_key(difference_type operand, memory_order order = memory_order::seq_cst) const noexcept;
```

Effects: Atomically replaces the value referenced by *ptr with
the result of the computation applied to the value referenced by *ptr
and the given operand.

Memory is affected according to the value of order.

These operations are atomic read-modify-write operations ([intro.races]).

```
T* operator op=(difference_type operand) const noexcept;
```

Effects: Equivalent to: return fetch_key(operand) op operand;

```
value_type operator++(int) const noexcept;
```

Effects: Equivalent to: return fetch_add(1);

```
value_type operator--(int) const noexcept;
```

Effects: Equivalent to: return fetch_sub(1);

```
value_type operator++() const noexcept;
```

Effects: Equivalent to: return fetch_add(1) + 1;

```
value_type operator--() const noexcept;
```

Effects: Equivalent to: return fetch_sub(1) - 1;