```
namespace std {
enum class memory_order : unspecified {
relaxed, consume, acquire, release, acq_rel, seq_cst
};
inline constexpr memory_order memory_order_relaxed = memory_order::relaxed;
inline constexpr memory_order memory_order_consume = memory_order::consume;
inline constexpr memory_order memory_order_acquire = memory_order::acquire;
inline constexpr memory_order memory_order_release = memory_order::release;
inline constexpr memory_order memory_order_acq_rel = memory_order::acq_rel;
inline constexpr memory_order memory_order_seq_cst = memory_order::seq_cst;
}
```

The enumeration memory_order specifies the detailed regular (non-atomic) memory synchronization order as defined in [intro.multithread] and may provide for operation ordering. Its enumerated values and their meanings are as follows:

- memory_order::relaxed: no operation orders memory.
- memory_order::release, memory_order::acq_rel, and memory_order::seq_cst: a store operation performs a release operation on the affected memory location.
- memory_order::consume: a load operation performs a consume operation on the affected memory location.
- memory_order::acquire, memory_order::acq_rel, and memory_order::seq_cst: a load operation performs an acquire operation on the affected memory location.

An atomic operation A that performs a release operation on an atomic
object M synchronizes with an atomic operation B that performs
an acquire operation on M and takes its value from any side effect in the
release sequence headed by A.

There shall be a single total order S on all memory_order::seq_cst
operations, consistent with the “happens before” order and modification orders for all
affected locations, such that each memory_order::seq_cst operation
B that loads a
value from an atomic object M
observes one of the following values:

- the result of the last modification A of M that precedes B in S, if it exists, or
- if A exists, the result of some modification of M that is not memory_order::seq_cst and that does not happen before A, or
- if A does not exist, the result of some modification of M that is not memory_order::seq_cst.
For an atomic operation B that reads the value of an atomic object M,
if there is a memory_order::seq_cst fence X sequenced before B,
then B observes either the last memory_order::seq_cst modification of
M preceding X in the total order S or a later modification of
M in its modification order.

For atomic operations A and B on an atomic object M, where
A modifies M and B takes its value, if there is a
memory_order::seq_cst fence X such that A is sequenced before
X and B follows X in S, then B observes
either the effects of A or a later modification of M in its
modification order.

For atomic operations A and B on an atomic object M, where
A modifies M and B takes its value, if there are
memory_order::seq_cst fences X and Y such that A is
sequenced before X, Y is sequenced before B, and X
precedes Y in S, then B observes either the effects of
A or a later modification of M in its modification order.

For atomic modifications A and B of an atomic object M,
B occurs later than A in the modification order of M if:

- there is a memory_order::seq_cst fence X such that A is sequenced before X, and X precedes B in S, or
- there is a memory_order::seq_cst fence Y such that Y is sequenced before B, and A precedes Y in S, or
- there are memory_order::seq_cst fences X and Y such that A is sequenced before X, Y is sequenced before B, and X precedes Y in S.

[Note: memory_order::seq_cst ensures sequential consistency only for a program that is free of data races and uses exclusively memory_order::seq_cst operations.

Any use of weaker ordering will invalidate this guarantee unless extreme care is used.

In particular, memory_order::seq_cst fences ensure a total order only for the fences themselves. Fences cannot, in general, be used to restore sequential consistency for atomic operations with weaker ordering specifications. — end note]

Implementations should ensure that no “out-of-thin-air” values are computed that circularly depend on their own computation.

[Note: For example, with x and y initially zero,

```
// Thread 1:
r1 = y.load(memory_order::relaxed);
x.store(r1, memory_order::relaxed);
```

```
// Thread 2:
r2 = x.load(memory_order::relaxed);
y.store(r2, memory_order::relaxed);
```

should not produce r1 == r2 == 42, since the store of 42 to y is only possible if the store to x stores 42, which circularly depends on the store to y storing 42. Note that without this restriction, such an execution is possible. — end note]

[Note: The recommendation similarly disallows r1 == r2 == 42 in the following example, with x and y again initially zero:

```
// Thread 1:
r1 = x.load(memory_order::relaxed);
if (r1 == 42) y.store(42, memory_order::relaxed);
```

```
// Thread 2:
r2 = y.load(memory_order::relaxed);
if (r2 == 42) x.store(42, memory_order::relaxed);
```

— end note]

Atomic read-modify-write operations shall always read the last value (in the modification order) written before the write associated with the read-modify-write operation.

Implementations should make atomic stores visible to atomic loads within a reasonable
amount of time.

```
template <class T>
T kill_dependency(T y) noexcept;
```