32 Concurrency support library [thread]
The following subclauses describe components to create and manage threads, perform mutual exclusion, and communicate conditions and values between threads, as summarized in Table 154.

Throughout this Clause, the names of template parameters are used to express type requirements. Let pred denote an lvalue of type Predicate. The return value of pred(), converted to bool, yields true if the corresponding test condition is satisfied, and false otherwise. If a template parameter is named Clock, the corresponding template argument shall be a type C that meets the Cpp17Clock requirements ([time.clock.req]); the program is ill-formed if is_clock_v<C> is false.

Some functions described in this Clause are specified to throw exceptions of type system_error ([syserr.syserr]). Such exceptions are thrown if any of the function's error conditions is detected or a call to an operating system or other underlying API results in an error that prevents the library function from meeting its specifications.
[Example 1: Consider a function in this Clause that is specified to throw exceptions of type system_error and specifies error conditions that include operation_not_permitted for a thread that does not have the privilege to perform the operation. Assume that, during the execution of this function, an errno of EPERM is reported by a POSIX API call used by the implementation. Since POSIX specifies an errno of EPERM when "the caller does not have the privilege to perform the operation", the implementation maps EPERM to an error_condition of operation_not_permitted ([syserr]) and an exception of type system_error is thrown. — end example]
The error_code reported by such an exception's code() member function compares equal to one of the conditions specified in the function's error condition element.

Several classes described in this Clause have members native_handle_type and native_handle. The presence of these members and their semantics is implementation-defined. [Note 1: These members allow implementations to provide access to implementation details. Their names are specified to facilitate portable compile-time detection. Actual use of these members is inherently non-portable. — end note]
Several functions described in this Clause take an argument to specify a timeout. These timeouts are specified as either a duration or a time_point type as specified in [time].

Implementations necessarily have some delay in returning from a timeout. Any overhead in interrupt response, function return, and scheduling induces a "quality of implementation" delay, expressed as duration Di. Ideally, this delay would be zero. Further, any contention for processor and memory resources induces a "quality of management" delay, expressed as duration Dm. The delay durations may vary from timeout to timeout, but in all cases shorter is better.

The functions whose names end in _for take an argument that specifies a duration. These functions produce relative timeouts. Implementations should use a steady clock to measure time for these functions. Given a duration argument Dt, the real-time duration of the timeout is Dt + Di + Dm.

The functions whose names end in _until take an argument that specifies a time point. These functions produce absolute timeouts. Implementations should use the clock specified in the time point to measure time for these functions. Given a clock time point argument Ct, the clock time point of the return from timeout should be Ct + Di + Dm when the clock is not adjusted during the timeout. If the clock is adjusted to the time Ca during the timeout, the behavior should be as follows:
- If Ca > Ct, the waiting function should wake as soon as possible, i.e., Ca + Di + Dm, since the timeout is already satisfied. This specification may result in the total duration of the wait decreasing when measured against a steady clock.
- If Ca ≤ Ct, the waiting function should not time out until Clock::now() returns a time Cn ≥ Ct, i.e., waking at Ct + Di + Dm. [Note 1: When the clock is adjusted backwards, this specification can result in the total duration of the wait increasing when measured against a steady clock. When the clock is adjusted forwards, this specification can result in the total duration of the wait decreasing when measured against a steady clock. — end note]
An implementation returns from such a timeout at any point from the time specified above to the time it would return from a steady-clock relative timeout on the difference between Ct and the time point of the call to the _until function.

Recommended practice: Implementations should decrease the duration of the wait when the clock is adjusted forwards. [Note 2: If the clock is not synchronized with a steady clock, e.g., a CPU time clock, these timeouts can fail to provide useful functionality. — end note]
The resolution of timing provided by an implementation depends on both operating system and hardware. A function that takes an argument which specifies a timeout will throw if, during its execution, a clock, time point, or time duration throws an exception. [Note 3: Instantiations of clock, time point, and duration types supplied by the implementation as specified in [time.clock] do not throw exceptions. — end note]
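As a non-normative illustration of the two timeout forms described above (not part of the specification text), the following sketch uses std::timed_mutex; the 50 millisecond budget and the steady_clock deadline are arbitrary values chosen for the example.

#include <chrono>
#include <mutex>

int main() {
  using namespace std::chrono_literals;
  std::timed_mutex m;

  // Relative timeout (_for): returns within roughly 50ms + Di + Dm
  // unless the lock is obtained first.
  if (m.try_lock_for(50ms)) {
    m.unlock();
  }

  // Absolute timeout (_until): measured against the clock of the time point.
  auto deadline = std::chrono::steady_clock::now() + 50ms;
  if (m.try_lock_until(deadline)) {
    m.unlock();
  }
}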
An execution agent is an entity such as a thread that may perform work in parallel with other execution agents. [Note 1: Implementations or users can introduce other kinds of agents such as processes or thread-pool tasks. — end note] The calling agent is determined by context, e.g., the calling thread that contains the call, and so on.

[Note 2: Some lockable objects are "agent oblivious" in that they work for any execution agent model because they do not determine or store the agent's ID (e.g., an ordinary spin lock). — end note]

[Note 3: The nature of any lock ownership and any synchronization it entails are not part of these requirements. — end note]
A lock on an object m is said to be
- a non-shared lock if it is acquired by a call to lock, try_lock, try_lock_for, or try_lock_until on m, or
- a shared lock if it is acquired by a call to lock_shared, try_lock_shared, try_lock_shared_for, or try_lock_shared_until on m.

[Note 4: Only the method of lock acquisition is considered; the nature of any lock ownership is not part of these definitions. — end note]
A type L meets the Cpp17BasicLockable requirements if the following expressions are well-formed and have the specified semantics (m denotes a value of type L).

m.lock()
Effects: Blocks until a lock can be acquired for the current execution agent. If an exception is thrown then a lock shall not have been acquired for the current execution agent.

m.unlock()
Preconditions: The current execution agent holds a non-shared lock on m.
Effects: Releases a non-shared lock on m held by the current execution agent.
A type L meets the Cpp17Lockable requirements if it meets the Cpp17BasicLockable requirements and the following expressions are well-formed and have the specified semantics (m denotes a value of type L).

m.try_lock()
Effects: Attempts to acquire a lock for the current execution agent without blocking. If an exception is thrown then a lock shall not have been acquired for the current execution agent.
Returns: true if the lock was acquired, otherwise false.
A type L meets the Cpp17TimedLockable requirements if it meets the Cpp17Lockable requirements and the following expressions are well-formed and have the specified semantics (m denotes a value of type L, rel_time denotes a value of an instantiation of duration, and abs_time denotes a value of an instantiation of time_point).

m.try_lock_for(rel_time)
Effects: Attempts to acquire a lock for the current execution agent within the relative timeout ([thread.req.timing]) specified by rel_time. The function will not return within the timeout specified by rel_time unless it has obtained a lock on m for the current execution agent. If an exception is thrown then a lock has not been acquired for the current execution agent.
Returns: true if the lock was acquired, otherwise false.

m.try_lock_until(abs_time)
Effects: Attempts to acquire a lock for the current execution agent before the absolute timeout ([thread.req.timing]) specified by abs_time. The function will not return before the timeout specified by abs_time unless it has obtained a lock on m for the current execution agent. If an exception is thrown then a lock has not been acquired for the current execution agent.
Returns: true if the lock was acquired, otherwise false.

A type L meets the Cpp17SharedLockable requirements if
the following expressions are well-formed, have the specified semantics, and the expression m.try_lock_shared() has type bool (m denotes a value of type L):

m.lock_shared()
Effects: Blocks until a lock can be acquired for the current execution agent. If an exception is thrown then a lock shall not have been acquired for the current execution agent.

m.try_lock_shared()
Effects: Attempts to acquire a lock for the current execution agent without blocking. If an exception is thrown then a lock shall not have been acquired for the current execution agent.
Returns: true if the lock was acquired, false otherwise.

m.unlock_shared()
Preconditions: The current execution agent holds a shared lock on m.
Effects: Releases a shared lock on m held by the current execution agent.
A type L meets the Cpp17SharedTimedLockable requirements if it meets the Cpp17SharedLockable requirements, and the following expressions are well-formed, have type bool, and have the specified semantics (m denotes a value of type L, rel_time denotes a value of a specialization of chrono::duration, and abs_time denotes a value of a specialization of chrono::time_point).

m.try_lock_shared_for(rel_time)
Effects: Attempts to acquire a lock for the current execution agent within
the relative timeout (
[thread.req.timing]) specified by 
rel_time.  The function will not return within the timeout specified by 
rel_time
unless it has obtained a lock on 
m for the current execution agent
. If an exception is thrown then a lock has not been acquired for the current execution agent.
Returns: true if the lock was acquired, false otherwise.

m.try_lock_shared_until(abs_time)
Effects: Attempts to acquire a lock for the current execution agent before
the absolute timeout (
[thread.req.timing]) specified by 
abs_time.  The function will not return before the timeout specified by 
abs_time
unless it has obtained a lock on 
m for the current execution agent
. If an exception is thrown then a lock has not been acquired for the current execution agent.
Returns: true if the lock was acquired, false otherwise.
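As a non-normative illustration of these requirements (not part of the specification text), the following minimal spin lock meets Cpp17BasicLockable and Cpp17Lockable; the class name SpinLock is invented for the example, and a real implementation would usually add back-off.

#include <atomic>
#include <mutex>

class SpinLock {
  std::atomic_flag flag;   // cleared on construction (since C++20)
public:
  // Cpp17BasicLockable: blocks until a lock is acquired for the calling agent.
  void lock() noexcept {
    while (flag.test_and_set(std::memory_order::acquire)) { /* spin */ }
  }
  // Cpp17BasicLockable: releases a lock held by the calling agent.
  void unlock() noexcept {
    flag.clear(std::memory_order::release);
  }
  // Cpp17Lockable: attempts to acquire the lock without blocking.
  bool try_lock() noexcept {
    return !flag.test_and_set(std::memory_order::acquire);
  }
};

int main() {
  SpinLock s;
  std::lock_guard guard(s);   // usable with the standard lock wrappers
}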
Subclause [thread.stoptoken] describes components that can be used to asynchronously request that an operation stops execution in a timely manner, typically because the result is no longer required
.An object of a type that models 
stoppable_token
can be passed to an operation that can either
- actively poll the token to check if there has been a stop request, or
- register a callback that will be called in the event that a stop request is made.
Once a stop request has been made it cannot be withdrawn
(a subsequent stop request has no effect)
.The types 
stop_source and 
stop_token and
the class template 
stop_callback implement
the semantics of shared ownership of a stop state
.The last remaining owner of the stop state automatically releases
the resources associated with the stop state
.An object of type 
inplace_stop_source
is the sole owner of its stop state
.An object of type 
inplace_stop_token or
of a specialization of the class template 
inplace_stop_callback
does not participate in ownership of its associated stop state. [Note 1: They are for use when all uses of the associated token and callback objects are known to nest within the lifetime of the inplace_stop_source object. — end note]
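A non-normative usage sketch (not part of the specification text) of stop_source, stop_token, and stop_callback follows; the polling loop and the callback body are illustrative only.

#include <chrono>
#include <iostream>
#include <stop_token>
#include <thread>

int main() {
  std::stop_source src;                      // shares ownership of the stop state

  std::thread worker([tok = src.get_token()] {
    while (!tok.stop_requested()) {          // actively poll the token
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
  });

  // Registers a callback; it runs on the thread calling request_stop(), or
  // immediately here if a stop request had already been made.
  std::stop_callback cb(src.get_token(), [] { std::cout << "stop requested\n"; });

  src.request_stop();                        // cannot be withdrawn
  worker.join();
}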
Let 
t and 
u be distinct, valid objects of type 
Token
that reference the same logical stop state;
let 
init be an expression such that
same_as<decltype(init), Initializer> is 
true; and
let 
SCB denote the type 
stop_callback_for_t<Token, CallbackFn>.The concept
stoppable-callback-for<CallbackFn, Token, Initializer>
is modeled only if:
- The following concepts are modeled:
- An object of type SCB has an associated callback function of type CallbackFn. Let scb be an object of type SCB and let callback_fn denote scb's associated callback function.
- Direct-non-list-initializing scb from arguments t and init shall execute a stoppable callback registration as follows:
  - If t.stop_possible() is true:
    - callback_fn shall be direct-initialized with init.
    - Construction of scb shall only throw exceptions thrown by the initialization of callback_fn from init.
    - The callback invocation std::forward<CallbackFn>(callback_fn)() shall be registered with t's associated stop state as follows:
      - If t.stop_requested() evaluates to false at the time of registration, the callback invocation is added to the stop state's list of callbacks such that std::forward<CallbackFn>(callback_fn)() is evaluated if a stop request is made on the stop state.
      - Otherwise, std::forward<CallbackFn>(callback_fn)() shall be immediately evaluated on the thread executing scb's constructor, and the callback invocation shall not be added to the list of callback invocations.
    - If the callback invocation was added to the stop state's list of callbacks, scb shall be associated with the stop state.
  - [Note 1: If t.stop_possible() is false, there is no requirement that the initialization of scb causes the initialization of callback_fn. — end note]
- Destruction of scb shall execute a stoppable callback deregistration as follows (in order):
  - If the constructor of scb did not register a callback invocation with t's stop state, then the stoppable callback deregistration shall have no effect other than destroying callback_fn if it was constructed.
  - Otherwise, the invocation of callback_fn shall be removed from the associated stop state.
    - If callback_fn is concurrently executing on another thread, then the stoppable callback deregistration shall block ([defns.block]) until the invocation of callback_fn returns such that the return from the invocation of callback_fn strongly happens before ([intro.races]) the destruction of callback_fn.
    - If callback_fn is executing on the current thread, then the destructor shall not block waiting for the return from the invocation of callback_fn.
    - A stoppable callback deregistration shall not block on the completion of the invocation of some other callback registered with the same logical stop state.
  - The stoppable callback deregistration shall destroy callback_fn.
The 
stoppable_token concept checks
for the basic interface of a stop token
that is copyable and allows polling to see if stop has been requested and
also whether a stop request is possible
.An object whose type models 
stoppable_token
has at most one associated logical stop state
.Let 
SP be an evaluation of 
t.stop_possible()
that is 
false, and
let SR be an evaluation of 
t.stop_requested() that is 
true. The type Token models stoppable_token only if:
- Any evaluation of u.stop_possible() or u.stop_requested() that happens after ([intro.races]) SP is false.
- Any evaluation of u.stop_possible() or u.stop_requested() that happens after SR is true.
- If t is disengaged, evaluations of t.stop_possible() and t.stop_requested() are false.
- If t and u reference the same stop state, or if both t and u are disengaged, t == u is true; otherwise, it is false.
An object
whose type models the exposition-only 
stoppable-source concept
can be queried
whether stop has been requested (
stop_requested) and
whether stop is possible (
stop_possible)
.It is a factory for associated stop tokens (
get_token), and
a stop request can be made on it (
request_stop)
.It maintains a list of registered stop callback invocations
that it executes when a stop request is first made
.An object whose type models 
stoppable-source has
at most one associated logical stop state
.If it has no associated stop state, it is said to be disengaged
.s.stop_possible() and 
s.stop_requested() shall be 
false.  If 
t is disengaged,
t.get_token() shall return a disengaged stop token;
otherwise, it shall return
a stop token that is associated with the stop state of 
t. Calls to the member functions
request_stop, 
stop_requested, and 
stop_possible and
similarly named member functions
on associated 
stoppable_token objects
do not introduce data races
.A call to 
request_stop that returns 
true synchronizes with
a call to 
stop_requested on
an associated
stoppable_token or 
stoppable-source object
that returns 
true.Registration of a callback synchronizes with the invocation of that callback
.If the 
stoppable-source is disengaged,
request_stop shall have no effect and return 
false.A stop request operation determines
whether the stop state has received a stop request, and
if not, makes a stop request
.The determination and making of the stop request shall happen atomically,
as-if by a read-modify-write operation (
[intro.races])
.If the request was made,
the stop state's registered callback invocations shall be
synchronously executed
.If an invocation of a callback exits via an exception
then 
terminate shall be invoked (
[except.terminate])
. [Note 2: No constraint is placed on the order in which the callback invocations are executed. — end note]
request_stop shall return 
true if a stop request was made, and
false otherwise
.  After a call to 
request_stop either
a call to 
stop_possible shall return 
false or
a call to 
stop_requested shall return 
true. [Note 3: A stop request includes notifying all condition variables of type condition_variable_any temporarily registered during an interruptible wait ([thread.condvarany.intwait]). — end note]
 It shares ownership of its stop state, if any,
with its associated 
stop_source object (
[stopsource]) and
any 
stop_token objects to which it compares equal
. namespace std {
  class stop_token {
  public:
    template<class CallbackFn>
      using callback_type = stop_callback<CallbackFn>;
    stop_token() noexcept = default;
    
    void swap(stop_token&) noexcept;
    bool stop_requested() const noexcept;
    bool stop_possible() const noexcept;
    bool operator==(const stop_token& rhs) const noexcept = default;
  private:
    shared_ptr<unspecified> stop-state;                           
  };
}
stop-state refers to the 
stop_token's associated stop state
.  A 
stop_token object is disengaged when 
stop-state is empty
.void swap(stop_token& rhs) noexcept;
Effects: Equivalent to:
stop-state.swap(rhs.stop-state);
bool stop_requested() const noexcept;
Returns: 
true if 
stop-state refers to a stop state
that has received a stop request;
otherwise, 
false. bool stop_possible() const noexcept;
Returns: 
false if
- *this is disengaged, or
- a stop request was not made
      and there are no associated stop_source objects;
otherwise, 
true. namespace std {
  class stop_source {
  public:
    
    stop_source();
    explicit stop_source(nostopstate_t) noexcept {}
    
    void swap(stop_source&) noexcept;
    stop_token get_token() const noexcept;
    bool stop_possible() const noexcept;
    bool stop_requested() const noexcept;
    bool request_stop() noexcept;
    bool operator==(const stop_source& rhs) const noexcept = default;
  private:
    shared_ptr<unspecified> stop-state;                         
  };
}
stop-state refers to the 
stop_source's associated stop state
.  A 
stop_source object is disengaged when 
stop-state is empty
.Effects: Initializes 
stop-state with a pointer to a new stop state
. Postconditions: 
stop_possible() is 
true
and 
stop_requested() is 
false. Throws: 
bad_alloc if memory cannot be allocated for the stop state
. void swap(stop_source& rhs) noexcept;
Effects: Equivalent to:
stop-state.swap(rhs.stop-state);
stop_token get_token() const noexcept;
Returns: 
stop_token() if 
stop_possible() is 
false;
otherwise a new associated 
stop_token object;
i.e., its 
stop-state member is equal to
the 
stop-state member of 
*this. bool stop_possible() const noexcept;
Returns: 
stop-state != nullptr. bool stop_requested() const noexcept;
Returns: 
true if 
stop-state refers to a stop state
that has received a stop request;
otherwise, 
false. bool request_stop() noexcept;
namespace std {
  template<class CallbackFn>
  class stop_callback {
  public:
    using callback_type = CallbackFn;
    
    template<class Initializer>
      explicit stop_callback(const stop_token& st, Initializer&& init)
        noexcept(is_nothrow_constructible_v<CallbackFn, Initializer>);
    template<class Initializer>
      explicit stop_callback(stop_token&& st, Initializer&& init)
        noexcept(is_nothrow_constructible_v<CallbackFn, Initializer>);
    ~stop_callback();
    stop_callback(const stop_callback&) = delete;
    stop_callback(stop_callback&&) = delete;
    stop_callback& operator=(const stop_callback&) = delete;
    stop_callback& operator=(stop_callback&&) = delete;
  private:
    CallbackFn callback-fn;                                     
  };
  template<class CallbackFn>
    stop_callback(stop_token, CallbackFn) -> stop_callback<CallbackFn>;
}
 Mandates: 
stop_callback is instantiated with an argument for the
template parameter 
CallbackFn
that satisfies both 
invocable
and 
destructible.  The exposition-only 
callback-fn member is
the associated callback function (
[stoptoken.concepts]) of
stop_callback<
CallbackFn> objects
. template<class Initializer>
  explicit stop_callback(const stop_token& st, Initializer&& init)
    noexcept(is_nothrow_constructible_v<CallbackFn, Initializer>);
template<class Initializer>
  explicit stop_callback(stop_token&& st, Initializer&& init)
    noexcept(is_nothrow_constructible_v<CallbackFn, Initializer>);
Effects: Initializes 
callback-fn with 
std::forward<Initializer>(init)
and executes a stoppable callback registration (
[stoptoken.concepts])
.  If a callback is registered with 
st's shared stop state,
then 
*this acquires shared ownership of that stop state
.Effects: Executes a stoppable callback deregistration (
[stoptoken.concepts]) and
releases ownership of the stop state, if any
.  It provides a stop token interface,
but also provides static information
that a stop is never possible nor requested
.  It references the stop state of
its associated 
inplace_stop_source object (
[stopsource.inplace]),
if any
. void swap(inplace_stop_token& rhs) noexcept;
Effects: Exchanges the values of 
stop-source and 
rhs.stop-source. bool stop_requested() const noexcept;
Effects: Equivalent to:
return stop-source != nullptr && stop-source->stop_requested();
[Note 1: As specified in [basic.life], the behavior of stop_requested is undefined unless the call strongly happens before the start of the destructor of the associated inplace_stop_source object, if any. — end note]
bool stop_possible() const noexcept;
Returns: 
stop-source != nullptr. [Note 2: As specified in [basic.stc.general], the behavior of stop_possible is implementation-defined unless the call strongly happens before the end of the storage duration of the associated inplace_stop_source object, if any. — end note]
namespace std {
  class inplace_stop_source {
  public:
    
    constexpr inplace_stop_source() noexcept;
    inplace_stop_source(inplace_stop_source&&) = delete;
    inplace_stop_source(const inplace_stop_source&) = delete;
    inplace_stop_source& operator=(inplace_stop_source&&) = delete;
    inplace_stop_source& operator=(const inplace_stop_source&) = delete;
    ~inplace_stop_source();
    
    constexpr inplace_stop_token get_token() const noexcept;
    static constexpr bool stop_possible() noexcept { return true; }
    bool stop_requested() const noexcept;
    bool request_stop() noexcept;
  };
}
constexpr inplace_stop_source() noexcept;
Effects: Initializes a new stop state inside 
*this. Postconditions: 
stop_requested() is 
false. constexpr inplace_stop_token get_token() const noexcept;
Returns: A new associated 
inplace_stop_token object
whose 
stop-source member is equal to 
this. bool stop_requested() const noexcept;
Returns: 
true if the stop state inside 
*this
has received a stop request; otherwise, 
false. bool request_stop() noexcept;
Postconditions: 
stop_requested() is 
true. namespace std {
  template<class CallbackFn>
  class inplace_stop_callback {
  public:
    using callback_type = CallbackFn;
    
    template<class Initializer>
      explicit inplace_stop_callback(inplace_stop_token st, Initializer&& init)
        noexcept(is_nothrow_constructible_v<CallbackFn, Initializer>);
    ~inplace_stop_callback();
    inplace_stop_callback(inplace_stop_callback&&) = delete;
    inplace_stop_callback(const inplace_stop_callback&) = delete;
    inplace_stop_callback& operator=(inplace_stop_callback&&) = delete;
    inplace_stop_callback& operator=(const inplace_stop_callback&) = delete;
  private:
    CallbackFn callback-fn;                                     
  };
  template<class CallbackFn>
    inplace_stop_callback(inplace_stop_token, CallbackFn)
      -> inplace_stop_callback<CallbackFn>;
}
 For an 
inplace_stop_callback<CallbackFn> object,
the exposition-only 
callback-fn member is
its associated callback function (
[stoptoken.concepts])
. template<class Initializer>
  explicit inplace_stop_callback(inplace_stop_token st, Initializer&& init)
    noexcept(is_nothrow_constructible_v<CallbackFn, Initializer>);
Effects: Initializes 
callback-fn with 
std::forward<Initializer>(init)
and executes a stoppable callback registration (
[stoptoken.concepts])
. ~inplace_stop_callback();
[Note 1: These threads are intended to map one-to-one with operating system threads. — end note]
 The class 
thread provides a mechanism to create a new thread of execution, to join with
a thread (i.e., wait for a thread to complete), and to perform other operations that manage and
query the state of a thread
.A 
thread object uniquely represents a particular thread of
execution
.That representation may be transferred to other 
thread objects in such a way
that no two 
thread objects simultaneously represent the same thread of execution
.A
thread of execution is 
detached when no 
thread object represents that thread
.Objects of class 
thread can be in a state that does not represent a thread of
execution
. [Note 1: A thread object does not represent a thread of execution after default construction, after being moved from, or after a successful call to detach or join. — end note]
namespace std {
  class thread {
  public:
    
    class id;
    using native_handle_type = implementation-defined;         
    
    thread() noexcept;
    template<class F, class... Args> explicit thread(F&& f, Args&&... args);
    ~thread();
    thread(const thread&) = delete;
    thread(thread&&) noexcept;
    thread& operator=(const thread&) = delete;
    thread& operator=(thread&&) noexcept;
    
    void swap(thread&) noexcept;
    bool joinable() const noexcept;
    void join();
    void detach();
    id get_id() const noexcept;
    native_handle_type native_handle();                         
    
    static unsigned int hardware_concurrency() noexcept;
  };
}
 namespace std {
  class thread::id {
  public:
    id() noexcept;
  };
  bool operator==(thread::id x, thread::id y) noexcept;
  strong_ordering operator<=>(thread::id x, thread::id y) noexcept;
  template<class charT, class traits>
    basic_ostream<charT, traits>&
      operator<<(basic_ostream<charT, traits>& out, thread::id id);
  template<class charT> struct formatter<thread::id, charT>;
  
  template<class T> struct hash;
  template<> struct hash<thread::id>;
}
 An object of type 
thread::id provides a unique identifier for
each thread of execution and a single distinct value for all 
thread
objects that do not represent a thread of
execution (
[thread.thread.class])
.Each thread of execution has an
associated 
thread::id object that is not equal to the
thread::id object of any other thread of execution and that is not
equal to the 
thread::id object of any 
thread object that
does not represent threads of execution
.The 
text representation for
the character type 
charT of an object of type 
thread::id
is an unspecified sequence of 
charT such that,
for two objects of type 
thread::id x and 
y,
if 
x == y is 
true,
the 
thread::id objects have the same text representation, and
if 
x != y is 
true,
the 
thread::id objects have distinct text representations
. The library may reuse the value of a 
thread::id of a terminated thread that can no longer be joined
. [Note 1: Relational operators allow thread::id objects to be used as keys in associative containers. — end note]
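A brief non-normative example (not part of the specification text) of the interface described above: printing a thread::id and using it as an associative-container key.

#include <iostream>
#include <map>
#include <string>
#include <thread>

int main() {
  std::map<std::thread::id, std::string> names;   // ordering provided by operator<=>
  names[std::this_thread::get_id()] = "main";

  std::thread t([] {});
  names[t.get_id()] = "worker";

  for (const auto& [id, name] : names)
    std::cout << id << " -> " << name << '\n';    // unspecified text representation

  t.join();
}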
id() noexcept;
Postconditions: The constructed object does not represent a thread of execution.

bool operator==(thread::id x, thread::id y) noexcept;
Returns: 
true only if 
x and 
y represent the same
thread of execution or neither 
x nor 
y represents a thread of
execution
. strong_ordering operator<=>(thread::id x, thread::id y) noexcept;
Let 
P(x, y) be
an unspecified total ordering over 
thread::id
as described in 
[alg.sorting].Returns: 
strong_ordering::less if 
P(x, y) is 
true.  Otherwise, 
strong_ordering::greater
if 
P(y, x) is 
true.Otherwise, 
strong_ordering::equal.template<class charT, class traits>
  basic_ostream<charT, traits>&
    operator<<(basic_ostream<charT, traits>& out, thread::id id);
Effects: Inserts the text representation for 
charT of 
id into
out. formatter<thread::id, charT> interprets 
format-spec
as a 
thread-id-format-spec.  The syntax of format specifications is as follows:
thread-id-format-spec:
    fill-and-align_opt width_opt
If the 
align option is omitted it defaults to 
>.A 
thread::id object is formatted by
writing its text representation for 
charT to the output
with additional padding and adjustments as specified by the format specifiers
.template<> struct hash<thread::id>;
thread() noexcept;
Effects: The object does not represent a thread of execution.
Postconditions: get_id() == id().

template<class F, class... Args> explicit thread(F&& f, Args&&... args);
Constraints: 
remove_cvref_t<F> is not the same type as 
thread. Mandates: The following are all 
true:
- is_constructible_v<decay_t<F>, F>,
- (is_constructible_v<decay_t<Args>, Args> && ...), and
- is_invocable_v<decay_t<F>, decay_t<Args>...>.
 Effects: The new thread of execution executes
invoke(auto(std::forward<F>(f)),                
       auto(std::forward<Args>(args))...)
with the values produced by 
auto
being materialized (
[conv.rval]) in the constructing thread
.  Any return value from this invocation is ignored
. [Note 1: This implies that any exceptions not thrown from the invocation of the copy of f will be thrown in the constructing thread, not the new thread. — end note]
If the invocation of 
invoke terminates with an uncaught exception,
terminate is invoked (
[except.terminate])
.Synchronization: The completion of the invocation of the constructor
synchronizes with the beginning of the invocation of the copy of 
f. Postconditions: 
get_id() != id().  *this represents the newly started thread
. Throws: 
system_error if unable to start the new thread
. Error conditions: 
- resource_unavailable_try_again — the system lacked the necessary resources to create another thread, or the system-imposed limit on the number of threads in a process would be exceeded.
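The following non-normative sketch (not part of the specification text) shows a consequence of the materialization described above: arguments are copied into the new thread, so passing a reference requires std::ref; the function names are invented for the example.

#include <functional>
#include <thread>

void by_value(int v) { ++v; }        // modifies the new thread's own copy
void by_reference(int& r) { ++r; }   // modifies the caller's object

int main() {
  int n = 0;

  std::thread t1(by_value, n);                 // n is copied for the new thread
  std::thread t2(by_reference, std::ref(n));   // reference_wrapper forwards a reference

  t1.join();
  t2.join();
  // n == 1 here: only t2 observed the original object.
}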
 thread(thread&& x) noexcept;
Postconditions: 
x.get_id() == id() and 
get_id() returns the
value of 
x.get_id() prior to the start of construction
.  Otherwise, has no effects
. [Note 1: Either implicitly detaching or joining a joinable() thread in its destructor can result in difficult-to-debug correctness (for detach) or performance (for join) bugs encountered only when an exception is thrown. These bugs can be avoided by ensuring that the destructor is never executed while the thread is still joinable. — end note]
 thread& operator=(thread&& x) noexcept;
 Otherwise, assigns the
state of 
x to 
*this and sets 
x to a default constructed state
. Postconditions: 
x.get_id() == id() and 
get_id() returns the value of
x.get_id() prior to the assignment
. void swap(thread& x) noexcept;
Effects: Swaps the state of 
*this and 
x. bool joinable() const noexcept;
Returns: 
get_id() != id(). Effects: Blocks until the thread represented by 
*this has completed
. Synchronization: The completion of the thread represented by 
*this synchronizes with (
[intro.multithread])
the corresponding successful
join() return
. [Note 1: Operations on *this are not synchronized. — end note]
Postconditions: The thread represented by 
*this has completed
. Error conditions:
- resource_deadlock_would_occur — if deadlock is detected or get_id() == this_thread::get_id().
- no_such_process — if the thread is not valid.
- invalid_argument — if the thread is not joinable.
 Effects: The thread represented by 
*this continues execution without the calling thread
blocking
.  When 
detach() returns, 
*this no longer represents the possibly continuing
thread of execution
.When the thread previously represented by 
*this ends execution, the
implementation releases any owned resources
.Postconditions: 
get_id() == id().
Error conditions:
- no_such_process — if the thread is not valid.
- invalid_argument — if the thread is not joinable.
 id get_id() const noexcept;
Returns: A default constructed 
id object if 
*this does not represent a thread,
otherwise 
this_thread::get_id() for the thread of execution represented by
*this.

static unsigned int hardware_concurrency() noexcept;
Returns: The number of hardware thread contexts
. [Note 1: This value should only be considered to be a hint. — end note]
If this value is not computable or
well-defined, an implementation should return 0
.void swap(thread& x, thread& y) noexcept;
Effects: As if by 
x.swap(y). The class 
jthread provides a mechanism
to create a new thread of execution
.The functionality is the same as for
class 
thread (
[thread.thread.class])
with the additional abilities to provide
a 
stop_token (
[thread.stoptoken]) to the new thread of execution,
make stop requests, and automatically join
.namespace std {
  class jthread {
  public:
    
    using id = thread::id;
    using native_handle_type = thread::native_handle_type;
    
    jthread() noexcept;
    template<class F, class... Args> explicit jthread(F&& f, Args&&... args);
    ~jthread();
    jthread(const jthread&) = delete;
    jthread(jthread&&) noexcept;
    jthread& operator=(const jthread&) = delete;
    jthread& operator=(jthread&&) noexcept;
    
    void swap(jthread&) noexcept;
    bool joinable() const noexcept;
    void join();
    void detach();
    id get_id() const noexcept;
    native_handle_type native_handle();                 
    
    stop_source get_stop_source() noexcept;
    stop_token get_stop_token() const noexcept;
    bool request_stop() noexcept;
    
    friend void swap(jthread& lhs, jthread& rhs) noexcept;
    
    static unsigned int hardware_concurrency() noexcept;
  private:
    stop_source ssource;        
  };
}
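A non-normative sketch (not part of the specification text) of the facilities just listed: the callable's first parameter receives a stop_token, and the destructor requests stop and joins; the loop body is illustrative.

#include <chrono>
#include <stop_token>
#include <thread>

int main() {
  std::jthread worker([](std::stop_token tok) {
    while (!tok.stop_requested()) {
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
  });
  // worker.request_stop() could be called explicitly; otherwise the destructor
  // calls request_stop() and then join() automatically.
}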
 Effects: Constructs a 
jthread object that does not represent
a thread of execution
. Postconditions: 
get_id() == id() is 
true
and 
ssource.stop_possible() is 
false. template<class F, class... Args> explicit jthread(F&& f, Args&&... args);
Constraints: 
remove_cvref_t<F> is not the same type as 
jthread. Mandates: The following are all 
true:
- is_constructible_v<decay_t<F>, F>,
- (is_constructible_v<decay_t<Args>, Args> && ...), and
- is_invocable_v<decay_t<F>, decay_t<Args>...> || 
  is_invocable_v<decay_t<F>, stop_token, decay_t<Args>...>. 
 Effects: Initializes 
ssource.  The new thread of execution executes
invoke(auto(std::forward<F>(f)), get_stop_token(),  
       auto(std::forward<Args>(args))...)
if that expression is well-formed,
otherwise
invoke(auto(std::forward<F>(f)), auto(std::forward<Args>(args))...)
with the values produced by 
auto
being materialized (
[conv.rval]) in the constructing thread
.Any return value from this invocation is ignored
. [Note 1: This implies that any exceptions not thrown from the invocation of the copy of f will be thrown in the constructing thread, not the new thread. — end note]
If the 
invoke expression exits via an exception,
terminate is called
.Synchronization: The completion of the invocation of the constructor
synchronizes with the beginning of the invocation of the copy of 
f. Postconditions: 
get_id() != id() is 
true
and 
ssource.stop_possible() is 
true
and 
*this represents the newly started thread
. [Note 2: The calling thread can make a stop request only once, because it cannot replace this stop token. — end note]
Throws: 
system_error if unable to start the new thread
. Error conditions: 
- resource_unavailable_try_again — the system lacked the necessary resources to create another thread, or the system-imposed limit on the number of threads in a process would be exceeded.
 jthread(jthread&& x) noexcept;
Postconditions: 
x.get_id() == id()
and 
get_id() returns the value of 
x.get_id()
prior to the start of construction
.  ssource has the value of 
x.ssource
prior to the start of construction
and 
x.ssource.stop_possible() is 
false. Effects: If 
joinable() is 
true,
calls 
request_stop() and then 
join(). [Note 3: Operations on *this are not synchronized. — end note]
jthread& operator=(jthread&& x) noexcept;
Effects: If 
&x == this is 
true, there are no effects
.  Otherwise, if 
joinable() is 
true,
calls 
request_stop() and then 
join(),
then assigns the state of 
x to 
*this
and sets 
x to a default constructed state
.Postconditions: 
get_id() returns the value of 
x.get_id()
prior to the assignment
.  ssource has the value of 
x.ssource
prior to the assignment
. void swap(jthread& x) noexcept;
Effects: Exchanges the values of 
*this and 
x. bool joinable() const noexcept;
Returns: 
get_id() != id(). Effects: Blocks until the thread represented by 
*this has completed
. Synchronization: The completion of the thread represented by 
*this
synchronizes with (
[intro.multithread])
the corresponding successful 
join() return
. [Note 1: Operations on *this are not synchronized. — end note]
Postconditions: The thread represented by 
*this has completed
. Error conditions:
- resource_deadlock_would_occur — if deadlock is detected or get_id() == this_thread::get_id().
- no_such_process — if the thread is not valid.
- invalid_argument — if the thread is not joinable.
 Effects: The thread represented by 
*this continues execution
without the calling thread blocking
.  When 
detach() returns,
*this no longer represents the possibly continuing thread of execution
.When the thread previously represented by 
*this ends execution,
the implementation releases any owned resources
.Postconditions: 
get_id() == id().
Error conditions:
- no_such_process — if the thread is not valid.
- invalid_argument — if the thread is not joinable.
 id get_id() const noexcept;
Returns: A default constructed 
id object
if 
*this does not represent a thread,
otherwise 
this_thread::get_id()
for the thread of execution represented by 
*this. stop_source get_stop_source() noexcept;
Effects: Equivalent to: return ssource;
stop_token get_stop_token() const noexcept;
Effects: Equivalent to: return ssource.get_token();
bool request_stop() noexcept;
Effects: Equivalent to: return ssource.request_stop();
friend void swap(jthread& x, jthread& y) noexcept;
Effects: Equivalent to: 
x.swap(y). static unsigned int hardware_concurrency() noexcept;
Returns: 
thread::hardware_concurrency(). namespace std::this_thread {
  thread::id get_id() noexcept;
  void yield() noexcept;
  template<class Clock, class Duration>
    void sleep_until(const chrono::time_point<Clock, Duration>& abs_time);
  template<class Rep, class Period>
    void sleep_for(const chrono::duration<Rep, Period>& rel_time);
}
thread::id this_thread::get_id() noexcept;
Returns: An object of type 
thread::id
that uniquely identifies the current thread of execution
.  Every invocation from this thread of execution returns the same value
.The object returned does not compare equal to
a default-constructed 
thread::id.void this_thread::yield() noexcept;
Effects: Offers the implementation the opportunity to reschedule
. template<class Clock, class Duration>
  void sleep_until(const chrono::time_point<Clock, Duration>& abs_time);
Effects: Blocks the calling thread for the absolute timeout (
[thread.req.timing]) specified
by 
abs_time. template<class Rep, class Period>
  void sleep_for(const chrono::duration<Rep, Period>& rel_time);
Effects: Blocks the calling thread for the relative timeout ([thread.req.timing]) specified by rel_time.
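A short non-normative example (not part of the specification text) of the namespace this_thread functions specified above; the durations are arbitrary.

#include <chrono>
#include <thread>

int main() {
  using namespace std::chrono_literals;
  std::this_thread::yield();                  // offer the implementation a chance to reschedule
  std::this_thread::sleep_for(5ms);           // relative timeout
  std::this_thread::sleep_until(std::chrono::steady_clock::now() + 5ms);  // absolute timeout
}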
Subclause
[atomics] describes components for fine-grained atomic access
.This access is provided via operations on atomic objects
.The type aliases 
atomic_intN_t, 
atomic_uintN_t,
atomic_intptr_t, and 
atomic_uintptr_t
are defined if and only if
intN_t, 
uintN_t,
intptr_t, and 
uintptr_t
are defined, respectively
.The type aliases
atomic_signed_lock_free and 
atomic_unsigned_lock_free
name specializations of 
atomic
whose template arguments are integral types, respectively signed and unsigned,
and whose 
is_always_lock_free property is 
true.[
Note 1: 
These aliases are optional in freestanding implementations (
[compliance])
. β 
end note]
Implementations should choose for these aliases
the integral specializations of 
atomic
for which the atomic waiting and notifying operations (
[atomics.wait])
are most efficient
.
namespace std {
  enum class memory_order : unspecified {
    relaxed = 0, acquire = 2, release = 3, acq_rel = 4, seq_cst = 5
  };
}
 The enumeration 
memory_order specifies the detailed regular
(non-atomic) memory synchronization order as defined in
[intro.multithread] and may provide for operation ordering
.Its
enumerated values and their meanings are as follows:
- memory_order::relaxed: no operation orders memory.
- memory_order::release, memory_order::acq_rel, and memory_order::seq_cst: a store operation performs a release operation on the affected memory location.
- memory_order::acquire, memory_order::acq_rel, and memory_order::seq_cst: a load operation performs an acquire operation on the affected memory location.
[Note 1: Atomic operations specifying memory_order::relaxed are relaxed with respect to memory ordering. Implementations must still guarantee that any given atomic access to a particular atomic object be indivisible with respect to all other atomic accesses to that object. — end note]
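A non-normative sketch (not part of the specification text) of the release/acquire pairing described above: the release store to ready synchronizes with the acquire load that observes it, making the preceding write to data visible; the names are invented for the example.

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> ready{false};
int data = 0;                       // ordinary (non-atomic) object

int main() {
  std::thread producer([] {
    data = 42;                                        // sequenced before the release store
    ready.store(true, std::memory_order::release);    // store performs a release operation
  });
  std::thread consumer([] {
    while (!ready.load(std::memory_order::acquire)) { /* spin */ }   // acquire operation
    assert(data == 42);                               // visible: release synchronizes with acquire
  });
  producer.join();
  consumer.join();
}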
An atomic operation 
A that performs a release operation on an atomic
object 
M synchronizes with an atomic operation 
B that performs
an acquire operation on 
M and takes its value from any side effect in the
release sequence headed by 
A.An atomic operation 
A on some atomic object 
M is
coherence-ordered before
another atomic operation 
B on 
M if
- A is a modification, and
B reads the value stored by A, or
- A precedes B
in the modification order of M, or
- A and B are not
the same atomic read-modify-write operation, and
there exists an atomic modification X of M
such that A reads the value stored by X and
X precedes B
in the modification order of M, or
- there exists an atomic modification X of M
such that A is coherence-ordered before X and
X is coherence-ordered before B.
There is a single total order 
S
on all 
memory_order::seq_cst operations, including fences,
that satisfies the following constraints
.First, if 
A and 
B are
memory_order::seq_cst operations and
A strongly happens before 
B,
then 
A precedes 
B in 
S.Second, for every pair of atomic operations 
A and
B on an object 
M,
where 
A is coherence-ordered before 
B,
the following four conditions are required to be satisfied by 
S:
- if A and B are both
memory_order::seq_cst operations,
then A precedes B in S; and
- if A is a memory_order::seq_cst operation and
B happens before
a memory_order::seq_cst fence Y,
then A precedes Y in S; and
- if a memory_order::seq_cst fence X
happens before A and
B is a memory_order::seq_cst operation,
then X precedes B in S; and
- if a memory_order::seq_cst fence X
happens before A and
B happens before
a memory_order::seq_cst fence Y,
then X precedes Y in S.
[Note 2: This definition ensures that S is consistent with the modification order of any atomic object M. It also ensures that a memory_order::seq_cst load A of M gets its value either from the last modification of M that precedes A in S or from some non-memory_order::seq_cst modification of M that does not happen before any modification of M that precedes A in S. — end note]
[Note 3: We do not require that S be consistent with "happens before" ([intro.races]). This allows more efficient implementation of memory_order::acquire and memory_order::release on some machine architectures. It can produce surprising results when these are mixed with memory_order::seq_cst accesses. — end note]
[Note 4: memory_order::seq_cst ensures sequential consistency only for a program that is free of data races and uses exclusively memory_order::seq_cst atomic operations. Any use of weaker ordering will invalidate this guarantee unless extreme care is used. In many cases, memory_order::seq_cst atomic operations are reorderable with respect to other atomic operations performed by the same thread. — end note]
Implementations should ensure that no "out-of-thin-air" values are computed that circularly depend on their own computation. [Note 5: For example, with x and y initially zero,

r1 = y.load(memory_order::relaxed);
x.store(r1, memory_order::relaxed);

r2 = x.load(memory_order::relaxed);
y.store(r2, memory_order::relaxed);

this recommendation discourages producing r1 == r2 == 42, since the store of 42 to y is only possible if the store to x stores 42, which circularly depends on the store to y storing 42. Note that without this restriction, such an execution is possible. — end note]
[Note 6: The recommendation similarly disallows r1 == r2 == 42 in the following example, with x and y again initially zero:

r1 = x.load(memory_order::relaxed);
if (r1 == 42) y.store(42, memory_order::relaxed);

r2 = y.load(memory_order::relaxed);
if (r2 == 42) x.store(42, memory_order::relaxed);

— end note]
Atomic read-modify-write operations shall always read the last value
(in the modification order) written before the write associated with
the read-modify-write operation
. [Note 7: The intent is for atomic read-modify-write operations to be implemented using mechanisms that are not ordered, in hardware, by the implementation of acquire fences. No other semantic or hardware property (e.g., that the mechanism is a far atomic operation) is implied. — end note]
 Recommended practice: The implementation should make atomic stores visible to atomic loads,
and atomic loads should observe atomic stores,
within a reasonable amount of time
. #define ATOMIC_BOOL_LOCK_FREE unspecified
#define ATOMIC_CHAR_LOCK_FREE unspecified
#define ATOMIC_CHAR8_T_LOCK_FREE unspecified
#define ATOMIC_CHAR16_T_LOCK_FREE unspecified
#define ATOMIC_CHAR32_T_LOCK_FREE unspecified
#define ATOMIC_WCHAR_T_LOCK_FREE unspecified
#define ATOMIC_SHORT_LOCK_FREE unspecified
#define ATOMIC_INT_LOCK_FREE unspecified
#define ATOMIC_LONG_LOCK_FREE unspecified
#define ATOMIC_LLONG_LOCK_FREE unspecified
#define ATOMIC_POINTER_LOCK_FREE unspecified
 The 
ATOMIC_..._LOCK_FREE macros indicate the lock-free property of the
corresponding atomic types, with the signed and unsigned variants grouped
together
.The properties also apply to the corresponding (partial) specializations of the
atomic template
.A value of 0 indicates that the types are never
lock-free
.A value of 1 indicates that the types are sometimes lock-free
.A
value of 2 indicates that the types are always lock-free
.On a hosted implementation (
[compliance]),
at least one signed integral specialization of the 
atomic template,
along with the specialization
for the corresponding unsigned type (
[basic.fundamental]),
is always lock-free
. In any given program execution, the
result of the lock-free query
is the same for all atomic objects of the same type
. Atomic operations that are not lock-free are considered to potentially
block (
[intro.progress])
.Recommended practice: Operations that are lock-free should also be address-free
.  
The implementation of these operations should not depend on any per-process state
. [Note 1: This restriction enables communication by memory that is mapped into a process more than once and by memory that is shared between two processes. — end note]
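A non-normative sketch (not part of the specification text) of the lock-free queries described above: the macro is a preprocessor-time property, is_always_lock_free a per-specialization constant, and is_lock_free() a per-execution query.

#include <atomic>
#include <iostream>

int main() {
  // 0: never lock-free, 1: sometimes lock-free, 2: always lock-free.
  std::cout << "ATOMIC_INT_LOCK_FREE     = " << ATOMIC_INT_LOCK_FREE << '\n';
  std::cout << "ATOMIC_POINTER_LOCK_FREE = " << ATOMIC_POINTER_LOCK_FREE << '\n';

  std::cout << std::boolalpha
            << "atomic<int>::is_always_lock_free = "
            << std::atomic<int>::is_always_lock_free << '\n';

  // The result is the same for all atomic objects of the same type
  // in a given program execution.
  std::atomic<long long> x{0};
  std::cout << "atomic<long long> lock-free now: " << x.is_lock_free() << '\n';
}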
 An atomic waiting operation may block until it is unblocked
by an atomic notifying operation, according to each function's effects
. [Note 1: Programs are not guaranteed to observe transient atomic values, an issue known as the A-B-A problem, resulting in continued blocking if a condition is only temporarily met. — end note]
 [
Note 2: 
The following functions are atomic waiting operations:
- atomic<T>::wait,
- atomic_flag::wait,
- atomic_wait and atomic_wait_explicit,
- atomic_flag_wait and atomic_flag_wait_explicit, and
- atomic_ref<T>::wait.
— end note]
[
Note 3: 
The following functions are atomic notifying operations:
- atomic<T>::notify_one and atomic<T>::notify_all,
- atomic_flag::notify_one and atomic_flag::notify_all,
- atomic_notify_one and atomic_notify_all,
- atomic_flag_notify_one and atomic_flag_notify_all, and
- atomic_ref<T>::notify_one and atomic_ref<T>::notify_all.
— end note]
A call to an atomic waiting operation on an atomic object 
M
is 
eligible to be unblocked
by a call to an atomic notifying operation on 
M
if there exist side effects 
X and 
Y on 
M such that:
- the atomic waiting operation has blocked after observing the result of X,
- X precedes Y in the modification order of M, and
- Y happens before the call to the atomic notifying operation.
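A non-normative sketch (not part of the specification text) of the waiting and notifying operations listed above: atomic<T>::wait blocks while the value equals its argument, and notify_one unblocks a waiter that is eligible to be unblocked.

#include <atomic>
#include <thread>

int main() {
  std::atomic<int> state{0};

  std::thread waiter([&] {
    state.wait(0);       // blocks while the value compares equal to 0
  });

  state.store(1);        // the waiter is now eligible to be unblocked
  state.notify_one();    // atomic notifying operation

  waiter.join();
}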
namespace std {
  template<class T> struct atomic_ref {
  private:
    T* ptr;             
  public:
    using value_type = remove_cv_t<T>;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;
    constexpr explicit atomic_ref(T&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;
    constexpr void store(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator=(value_type) const noexcept;
    constexpr value_type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator value_type() const noexcept;
    constexpr value_type exchange(value_type,
                                  memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type,
                                         memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type,
                                           memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type,
                                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type,
                                           memory_order = memory_order::seq_cst) const noexcept;
    constexpr void wait(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
    constexpr T* address() const noexcept;
  };
}
 An 
atomic_ref object applies atomic operations (
[atomics.general]) to
the object referenced by 
*ptr such that,
for the lifetime (
[basic.life]) of the 
atomic_ref object,
the object referenced by 
*ptr is an atomic object (
[intro.races])
.The program is ill-formed if 
is_trivially_copyable_v<T> is 
false.The lifetime (
[basic.life]) of an object referenced by 
*ptr
shall exceed the lifetime of all 
atomic_refs that reference the object
.While any 
atomic_ref instances exist
that reference the 
*ptr object,
all accesses to that object shall exclusively occur
through those 
atomic_ref instances
.No subobject of the object referenced by 
atomic_ref
shall be concurrently referenced by any other 
atomic_ref object
.Atomic operations applied to an object
through a referencing 
atomic_ref are atomic with respect to
atomic operations applied through any other 
atomic_ref
referencing the same object
. [Note 1: Atomic operations or the atomic_ref constructor can acquire a shared resource, such as a lock associated with the referenced object, to enable atomic operations to be applied to the referenced object. — end note]
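A non-normative usage sketch (not part of the specification text): a plain int is accessed atomically through atomic_ref objects for the duration of the concurrent phase, and no other access to it occurs during that phase.

#include <atomic>
#include <thread>
#include <vector>

int main() {
  alignas(std::atomic_ref<int>::required_alignment) int counter = 0;

  {
    std::vector<std::thread> pool;
    for (int i = 0; i != 4; ++i)
      pool.emplace_back([&counter] {
        std::atomic_ref<int> ref(counter);              // atomic view of the plain object
        for (int j = 0; j != 1000; ++j)
          ref.fetch_add(1, std::memory_order::relaxed); // integral specialization operation
      });
    for (auto& t : pool) t.join();
  }

  // No atomic_ref referencing counter remains, so ordinary access is allowed again.
  return counter == 4000 ? 0 : 1;
}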
The program is ill-formed
if 
is_always_lock_free is 
false and
is_volatile_v<T> is 
true.

static constexpr size_t required_alignment;
The alignment required for an object to be referenced by an atomic reference,
which is at least 
alignof(T). [Note 1: Hardware could require an object referenced by an atomic_ref to have stricter alignment ([basic.align]) than other objects of type T. Further, whether operations on an atomic_ref are lock-free could depend on the alignment of the referenced object. For example, lock-free operations on std::complex<double> could be supported only if aligned to 2*alignof(double). — end note]
static constexpr bool is_always_lock_free;
The static data member 
is_always_lock_free is 
true
if the 
atomic_ref type's operations are always lock-free,
and 
false otherwise
.bool is_lock_free() const noexcept;
Returns: 
true if operations on all objects of the type 
atomic_ref<T>
are lock-free,
false otherwise
.
constexpr explicit atomic_ref(T& obj);
Preconditions: The referenced object is aligned to 
required_alignment. Postconditions: 
*this references 
obj. constexpr atomic_ref(const atomic_ref& ref) noexcept;
Postconditions: 
*this references the object referenced by 
ref. constexpr void store(value_type desired,
                     memory_order order = memory_order::seq_cst) const noexcept;
Constraints: 
is_const_v<T> is 
false. Preconditions: 
order is
memory_order::relaxed,
memory_order::release, or
memory_order::seq_cst. Effects: Atomically replaces the value referenced by 
*ptr
with the value of 
desired.  Memory is affected according to the value of 
order.constexpr value_type operator=(value_type desired) const noexcept;
Constraints: 
is_const_v<T> is 
false. Effects: Equivalent to:
store(desired);
return desired;
constexpr value_type load(memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: 
order is
memory_order::relaxed,
memory_order::acquire, or
memory_order::seq_cst. Effects: Memory is affected according to the value of 
order. Returns: Atomically returns the value referenced by 
*ptr. constexpr operator value_type() const noexcept;
Effects: Equivalent to: return load();
constexpr value_type exchange(value_type desired,
                              memory_order order = memory_order::seq_cst) const noexcept;
Constraints: 
is_const_v<T> is 
false. Effects: Atomically replaces the value referenced by 
*ptr
with 
desired.  Memory is affected according to the value of 
order.Returns: Atomically returns the value referenced by 
*ptr
immediately before the effects
. constexpr bool compare_exchange_weak(value_type& expected, value_type desired,
                           memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_strong(value_type& expected, value_type desired,
                             memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_weak(value_type& expected, value_type desired,
                           memory_order order = memory_order::seq_cst) const noexcept;
constexpr bool compare_exchange_strong(value_type& expected, value_type desired,
                             memory_order order = memory_order::seq_cst) const noexcept;
Constraints: is_const_v<T> is false.
Preconditions: failure is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Retrieves the value in expected. It then atomically compares the value representation of the value referenced by *ptr for equality with that previously retrieved from expected, and if true, replaces the value referenced by *ptr with that in desired. If and only if the comparison is true, memory is affected according to the value of success, and if the comparison is false, memory is affected according to the value of failure. When only one memory_order argument is supplied, the value of success is order, and the value of failure is order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed. If and only if the comparison is false then, after the atomic operation, the value in expected is replaced by the value read from the value referenced by *ptr during the atomic comparison. If the operation returns true, these operations are atomic read-modify-write operations ([intro.races]) on the value referenced by *ptr. Otherwise, these operations are atomic load operations on that memory.
Returns: The result of the comparison.
Remarks: A weak compare-and-exchange operation may fail spuriously. That is, even when the contents of memory referred to by expected and ptr are equal, it may return false and store back to expected the same memory contents that were originally there.
[Note 2:
This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. – end note]
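[Informal example, not part of the normative wording: the typical retry loop for compare_exchange_weak applied through atomic_ref; the function and variable names are illustrative only.
#include <atomic>

// Atomically add delta to counter, but never let the result exceed limit.
void add_up_to(int& counter, int delta, int limit) {
  std::atomic_ref ref(counter);
  int expected = ref.load();
  int desired;
  do {
    desired = expected + delta < limit ? expected + delta : limit;
    // On failure, expected has been refreshed with the current value,
    // so desired is recomputed from up-to-date data.
  } while (!ref.compare_exchange_weak(expected, desired));
}
– end informal example]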
constexpr void wait(value_type old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares its value representation for equality against that of old.
- If they compare unequal, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: This function is an atomic waiting operation ([atomics.wait]) on atomic object *ptr.
constexpr void notify_one() const noexcept;
Constraints: is_const_v<T> is false.
Effects: Unblocks the execution of at least one atomic waiting operation on *ptr that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
Remarks: This function is an atomic notifying operation ([atomics.wait]) on atomic object *ptr.
constexpr void notify_all() const noexcept;
Constraints: is_const_v<T> is false.
Effects: Unblocks the execution of all atomic waiting operations on *ptr that are eligible to be unblocked ([atomics.wait]) by this call.
Remarks: This function is an atomic notifying operation ([atomics.wait]) on atomic object *ptr.
constexpr T* address() const noexcept;
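[Informal example, not part of the normative wording: one thread blocks on the referenced object with wait until another thread changes the value and calls notify_one; the names flag, consumer, and producer are illustrative only.
#include <atomic>

int flag = 0;

void consumer() {
  std::atomic_ref ref(flag);
  ref.wait(0);                               // blocks while the value equals 0
  // here the observed value differs from 0
}

void producer() {
  std::atomic_ref ref(flag);
  ref.store(1, std::memory_order::release);
  ref.notify_one();                          // unblocks one eligible waiter, if any
}
– end informal example]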
There are specializations of the atomic_ref class template for all integral types except cv bool. For each such type integral-type, the specialization atomic_ref<integral-type> provides additional atomic operations appropriate to integral types. The program is ill-formed if is_always_lock_free is false and is_volatile_v<integral-type> is true.
namespace std {
  template<> struct atomic_ref<integral-type> {
  private:
    integral-type* ptr;         
  public:
    using value_type = remove_cv_t<integral-type>;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;
    constexpr explicit atomic_ref(integral-type&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;
    constexpr void store(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator=(value_type) const noexcept;
    constexpr value_type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator value_type() const noexcept;
    constexpr value_type exchange(value_type,
                                  memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type,
                                         memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type,
                                           memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type,
                                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type,
                                           memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_add(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_sub(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_and(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_or(value_type,
                                  memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_xor(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_max(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_min(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_add(value_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_sub(value_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_and(value_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_or(value_type,
                            memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_xor(value_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_max(value_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_min(value_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator++(int) const noexcept;
    constexpr value_type operator--(int) const noexcept;
    constexpr value_type operator++() const noexcept;
    constexpr value_type operator--() const noexcept;
    constexpr value_type operator+=(value_type) const noexcept;
    constexpr value_type operator-=(value_type) const noexcept;
    constexpr value_type operator&=(value_type) const noexcept;
    constexpr value_type operator|=(value_type) const noexcept;
    constexpr value_type operator^=(value_type) const noexcept;
    constexpr void wait(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
    constexpr integral-type* address() const noexcept;
  };
}
Descriptions are provided below only for members that differ from the primary template. The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 155.
constexpr value_type fetch_key(value_type operand,
                               memory_order order = memory_order::seq_cst) const noexcept;
Constraints: is_const_v<integral-type> is false.
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations ([intro.races]).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: Except for fetch_max and fetch_min, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
[Note 2:
There are no undefined results arising from the computation. – end note]
For fetch_max and fetch_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
constexpr void store_key(value_type operand,
                         memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order.
Remarks: Except for store_max and store_min, for signed integer types, the result is as if *ptr and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
[Note 3:
There are no undefined results arising from the computation. – end note]
For store_max and store_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with *ptr and the first parameter as the arguments.
constexpr value_type operator op=(value_type operand) const noexcept;
Constraints: is_const_v<integral-type> is false.
Effects: Equivalent to:
return fetch_key(operand) op operand;
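[Informal example, not part of the normative wording: fetch_add returns the value held before the addition, and fetch_max (described above) keeps the larger of the stored value and its operand; the names are illustrative only.
#include <atomic>

void record(int& total, int& high_water, int amount) {
  std::atomic_ref t(total);
  std::atomic_ref h(high_water);
  int previous = t.fetch_add(amount, std::memory_order::relaxed);  // value before the addition
  h.fetch_max(previous + amount);                                  // retain the maximum total observed
}
– end informal example]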
There are specializations of the atomic_ref class template for all floating-point types. For each such type floating-point-type, the specialization atomic_ref<floating-point-type> provides additional atomic operations appropriate to floating-point types. The program is ill-formed if is_always_lock_free is false and is_volatile_v<floating-point-type> is true.
namespace std {
  template<> struct atomic_ref<floating-point-type> {
  private:
    floating-point-type* ptr;   
  public:
    using value_type = remove_cv_t<floating-point-type>;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;
    constexpr explicit atomic_ref(floating-point-type&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;
    constexpr void store(value_type,
                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator=(value_type) const noexcept;
    constexpr value_type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator value_type() const noexcept;
    constexpr value_type exchange(value_type,
                                  memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type,
                                         memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type,
                                           memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type,
                                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type,
                                           memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_add(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_sub(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_max(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_min(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_fmaximum(value_type,
                                        memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_fminimum(value_type,
                                        memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_fmaximum_num(value_type,
                                            memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_fminimum_num(value_type,
                                            memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_add(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_sub(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_max(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_min(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fmaximum(value_type,
                                  memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fminimum(value_type,
                                  memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fmaximum_num(value_type,
                                      memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fminimum_num(value_type,
                                      memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator+=(value_type) const noexcept;
    constexpr value_type operator-=(value_type) const noexcept;
    constexpr void wait(value_type,
                        memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
    constexpr floating-point-type* address() const noexcept;
  };
}
Descriptions are provided below only for members that differ from the primary template. The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 155, except for the keys max, min, fmaximum, fminimum, fmaximum_num, and fminimum_num, which are specified below.
constexpr value_type fetch_key(value_type operand,
                               memory_order order = memory_order::seq_cst) const noexcept;
Constraints: is_const_v<floating-point-type> is false.
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations ([intro.races]).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: If the result is not a representable value for its type ([expr.pre]), the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point-type should conform to the std::numeric_limits<value_type> traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
- For fetch_fmaximum and fetch_fminimum, the maximum and minimum computation is performed as if by fmaximum and fminimum, respectively, with *ptr and the first parameter as the arguments.
- For fetch_fmaximum_num and fetch_fminimum_num, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with *ptr and the first parameter as the arguments.
- For fetch_max and fetch_min, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with *ptr and the first parameter as the arguments, except that:
  - If both arguments are NaN, an unspecified NaN value is stored at *ptr.
  - If exactly one argument is a NaN, either the other argument or an unspecified NaN value is stored at *ptr; it is unspecified which.
  - If the arguments are differently signed zeros, which of these values is stored at *ptr is unspecified.
Recommended practice: The implementation of fetch_max and fetch_min should treat negative zero as smaller than positive zero.
constexpr void store_key(value_type operand,
                         memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order.
Remarks: If the result is not a representable value for its type ([expr.pre]), the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point-type should conform to the numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment. The arithmetic rules of floating-point atomic modify-write operations may be different from operations on floating-point types or atomic floating-point types.
[Note 1:
Tree reductions are permitted for atomic modify-write operations. – end note]
- For store_fmaximum and store_fminimum, the maximum and minimum computation is performed as if by fmaximum and fminimum, respectively, with *ptr and the first parameter as the arguments.
- For store_fmaximum_num and store_fminimum_num, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with *ptr and the first parameter as the arguments.
- For store_max and store_min, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with *ptr and the first parameter as the arguments, except that:
  - If both arguments are NaN, an unspecified NaN value is stored at *ptr.
  - If exactly one argument is a NaN, either the other argument or an unspecified NaN value is stored at *ptr; it is unspecified which.
  - If the arguments are differently signed zeros, which of these values is stored at *ptr is unspecified.
Recommended practice: The implementation of store_max and store_min should treat negative zero as smaller than positive zero.
constexpr value_type operator op=(value_type operand) const noexcept;
Constraints: is_const_v<floating-point-type> is false.
Effects: Equivalent to:
return fetch_key(operand) op operand;
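[Informal example, not part of the normative wording: concurrent threads accumulate partial sums into one double through the floating-point specialization; because fetch_add is a read-modify-write operation, no update is lost, although the rounding of each intermediate result follows the remarks above. The names are illustrative only.
#include <atomic>

void accumulate(double& total, double partial) {
  std::atomic_ref ref(total);
  ref.fetch_add(partial, std::memory_order::relaxed);
}
– end informal example]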
There are specializations of the atomic_ref class template for all pointer-to-object types. For each such type pointer-type, the specialization atomic_ref<pointer-type> provides additional atomic operations appropriate to pointer types. The program is ill-formed if is_always_lock_free is false and is_volatile_v<pointer-type> is true.
namespace std {
  template<> struct atomic_ref<pointer-type> {
  private:
    pointer-type* ptr;        
  public:
    using value_type = remove_cv_t<pointer-type>;
    using difference_type = ptrdiff_t;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;
    constexpr explicit atomic_ref(pointer-type&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;
    constexpr void store(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator=(value_type) const noexcept;
    constexpr value_type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator value_type() const noexcept;
    constexpr value_type exchange(value_type,
                                  memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type,
                                         memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type,
                                           memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type,
                                         memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type,
                                           memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_add(difference_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_sub(difference_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_max(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_min(value_type,
                                   memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_add(difference_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_sub(difference_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_max(value_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_min(value_type,
                             memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator++(int) const noexcept;
    constexpr value_type operator--(int) const noexcept;
    constexpr value_type operator++() const noexcept;
    constexpr value_type operator--() const noexcept;
    constexpr value_type operator+=(difference_type) const noexcept;
    constexpr value_type operator-=(difference_type) const noexcept;
    constexpr void wait(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
    constexpr pointer-type* address() const noexcept;
  };
}
Descriptions are provided below only for members that differ from the primary template. The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 156.
constexpr value_type fetch_key(difference_type operand,
                               memory_order order = memory_order::seq_cst) const noexcept;
Constraints: is_const_v<pointer-type> is false.
Mandates: remove_pointer_t<pointer-type> is a complete object type.
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations ([intro.races]).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior. For fetch_max and fetch_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
[Note 1:
If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]). – end note]
constexpr void store_key(see above operand,
                         memory_order order = memory_order::seq_cst) const noexcept;
Mandates: remove_pointer_t<pointer-type> is a complete object type.
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior. For store_max and store_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with *ptr and the first parameter as the arguments.
[Note 2:
If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]). – end note]
constexpr value_type operator op=(difference_type operand) const noexcept;
Constraints: is_const_v<pointer-type> is false.
Effects: Equivalent to:
return fetch_key(operand) op operand;
constexpr value_type operator++(int) const noexcept;
Constraints: is_const_v<referred-type> is false.
Effects: Equivalent to: return fetch_add(1);
constexpr value_type operator--(int) const noexcept;
Constraints: is_const_v<referred-type> is false.
Effects: Equivalent to: return fetch_sub(1);
constexpr value_type operator++() const noexcept;
Constraints: is_const_v<referred-type> is false.
Effects: Equivalent to: return fetch_add(1) + 1;
constexpr value_type operator--() const noexcept;
Constraints: is_const_v<referred-type> is false.
Effects: Equivalent to: return fetch_sub(1) - 1;
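[Informal example, not part of the normative wording: a sketch of the pointer specialization used to hand out disjoint slots of an array; fetch_add advances the referenced pointer by the given number of elements and returns the previous pointer. The names are illustrative only, and the caller is assumed to keep cursor within one array object.
#include <atomic>
#include <cstddef>

int* claim(int*& cursor, std::ptrdiff_t n) {
  std::atomic_ref ref(cursor);
  return ref.fetch_add(n);   // previous value: the start of this caller's n elements
}
– end informal example]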
namespace std {
  template<class T> struct atomic {
    using value_type = T;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;
    
    constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
    constexpr atomic(T) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;
    T load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T load(memory_order = memory_order::seq_cst) const noexcept;
    operator T() const volatile noexcept;
    constexpr operator T() const noexcept;
    void store(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T, memory_order = memory_order::seq_cst) noexcept;
    T operator=(T) volatile noexcept;
    constexpr T operator=(T) noexcept;
    T exchange(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T exchange(T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) noexcept;
    void wait(T, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
The program is ill-formed if any of
- is_trivially_copyable_v<T>,
- is_copy_constructible_v<T>,
- is_move_constructible_v<T>,
- is_copy_assignable_v<T>,
- is_move_assignable_v<T>, or
- same_as<T, remove_cv_t<T>>,
is false.
[Note 1:
Type arguments that are not also statically initializable can be difficult to use. – end note]
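[Informal example, not part of the normative wording: a type usable with the primary template satisfies all of the traits listed above; the type Point and the functions are illustrative only.
#include <atomic>
#include <type_traits>

struct Point { int x; int y; };
static_assert(std::is_trivially_copyable_v<Point>);

std::atomic<Point> position{Point{0, 0}};

void move_to(Point p) {
  position.store(p);                  // whole-object atomic store
  Point snapshot = position.load();   // whole-object atomic load
  (void)snapshot;
}
– end informal example]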
The specialization atomic<bool> is a standard-layout struct. It has a trivial destructor.
[Note 2:
The representation of an atomic specialization need not have the same size and alignment requirement as its corresponding argument type. – end note]
constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
Constraints: is_default_constructible_v<T> is true.
Effects: Initializes the atomic object with the value of T().
constexpr atomic(T desired) noexcept;
Effects: Initializes the object with the value desired.
[Note 1:
It is possible to have an access to an atomic object A race with its construction, for example by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. – end note]
static constexpr bool is_always_lock_free = implementation-defined;
The static data member is_always_lock_free is true if the atomic type's operations are always lock-free, and false otherwise.
[Note 2:
The value of is_always_lock_free is consistent with the value of the corresponding ATOMIC_..._LOCK_FREE macro, if defined. – end note]
bool is_lock_free() const volatile noexcept;
bool is_lock_free() const noexcept;
Returns: true if the object's operations are lock-free, false otherwise.
[Note 3:
The return value of the is_lock_free member function is consistent with the value of is_always_lock_free for the same type. – end note]
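[Informal example, not part of the normative wording: the static member can drive a compile-time decision, while the member function reports the property of a particular object at run time; the function name is illustrative only.
#include <atomic>

template<class T>
void process(std::atomic<T>& a) {
  if constexpr (std::atomic<T>::is_always_lock_free) {
    // every operation on every object of this type is lock-free
  } else if (a.is_lock_free()) {
    // this object's operations are lock-free, though not guaranteed for all objects
  }
}
– end informal example]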
void store(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the value of desired. Memory is affected according to the value of order.
T operator=(T desired) volatile noexcept;
constexpr T operator=(T desired) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to store(desired).
T load(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr T load(memory_order order = memory_order::seq_cst) const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by this.
operator T() const volatile noexcept;
constexpr operator T() const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return load();
T exchange(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T exchange(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with desired. Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by this immediately before the effects.
bool compare_exchange_weak(T& expected, T desired,
                           memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
                           memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
                             memory_order success, memory_order failure) noexcept;
bool compare_exchange_weak(T& expected, T desired,
                           memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
                           memory_order order = memory_order::seq_cst) noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
                             memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: failure is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Retrieves the value in expected. It then atomically compares the value representation of the value pointed to by this for equality with that previously retrieved from expected, and if true, replaces the value pointed to by this with that in desired. If and only if the comparison is true, memory is affected according to the value of success, and if the comparison is false, memory is affected according to the value of failure. When only one memory_order argument is supplied, the value of success is order, and the value of failure is order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed. If and only if the comparison is false then, after the atomic operation, the value in expected is replaced by the value pointed to by this during the atomic comparison. If the operation returns true, these operations are atomic read-modify-write operations ([intro.multithread]) on the memory pointed to by this. Otherwise, these operations are atomic load operations on that memory.
Returns: The result of the comparison.
[Note 4:
For example, the effect of compare_exchange_strong on objects without padding bits ([basic.types.general]) is
if (memcmp(this, &expected, sizeof(*this)) == 0)
  memcpy(this, &desired, sizeof(*this));
else
  memcpy(&expected, this, sizeof(*this));
– end note]
[Example 1:
The expected use of the compare-and-exchange operations is as follows. The compare-and-exchange operations will update expected when another iteration of the loop is needed.
expected = current.load();
do {
  desired = function(expected);
} while (!current.compare_exchange_weak(expected, desired));
– end example]
[Example 2:
Because the expected value is updated only on failure, code releasing the memory containing the expected value on success will work. For example, list head insertion will act atomically and would not introduce a data race in the following code:
do {
  p->next = head;
} while (!head.compare_exchange_weak(p->next, p));
– end example]
Implementations should ensure that weak compare-and-exchange operations do not consistently return false unless either the atomic object has value different from expected or there are concurrent modifications to the atomic object.
Remarks: A weak compare-and-exchange operation may fail spuriously. That is, even when the contents of memory referred to by expected and this are equal, it may return false and store back to expected the same memory contents that were originally there.
[Note 5:
This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. – end note]
[Note 6:
Under cases where the memcpy and memcmp semantics of the compare-and-exchange operations apply, the comparisons can fail for values that compare equal with operator== if the value representation has trap bits or alternate representations of the same value. Notably, on implementations conforming to ISO/IEC 60559, floating-point -0.0 and +0.0 will not compare equal with memcmp but will compare equal with operator==, and NaNs with the same payload will compare equal with memcmp but will not compare equal with operator==. – end note]
[Note 7:
Because compare-and-exchange acts on an object's value representation, padding bits that never participate in the object's value representation are ignored. As a consequence, the following code is guaranteed to avoid spurious failure:
struct padded {
  char clank = 0x42;
  
  unsigned biff = 0xC0DEFEFE;
};
atomic<padded> pad = {};
bool zap() {
  padded expected, desired{0, 0};
  return pad.compare_exchange_strong(expected, desired);
}
– end note]
[Note 8:
For a union with bits that participate in the value representation of some members but not others, compare-and-exchange might always fail. This is because such padding bits have an indeterminate value when they do not participate in the value representation of the active member. As a consequence, the following code is not guaranteed to ever succeed:
union pony {
  double celestia = 0.;
  short luna;       
};
atomic<pony> princesses = {};
bool party(pony desired) {
  pony expected;
  return princesses.compare_exchange_strong(expected, desired);
}
– end note]
void wait(T old, memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares its value representation for equality against that of old.
- If they compare unequal, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
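[Informal example, not part of the normative wording: a rough model of the steps above, assuming T has no padding bits so that comparing object representations with memcmp matches the value-representation comparison; a real implementation blocks instead of spinning. The function name is illustrative only.
#include <atomic>
#include <cstring>

template<class T>
void wait_model(const std::atomic<T>& a, T old, std::memory_order order) {
  for (;;) {
    T current = a.load(order);
    if (std::memcmp(&current, &old, sizeof(T)) != 0)
      return;                     // representations differ: the wait is over
    // otherwise: block until an atomic notifying operation (or a spurious
    // wakeup) occurs, then re-evaluate the load
  }
}
– end informal example]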
void notify_one() volatile noexcept;
constexpr void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
void notify_all() volatile noexcept;
constexpr void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
There are specializations of the atomic class template for the integral types char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, char8_t, char16_t, char32_t, wchar_t, and any other types needed by the typedefs in the header <cstdint>. For each such type integral-type, the specialization atomic<integral-type> provides additional atomic operations appropriate to integral types.
namespace std {
  template<> struct atomic<integral-type> {
    using value_type = integral-type;
    using difference_type = value_type;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;
    constexpr atomic() noexcept;
    constexpr atomic(integral-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;
    void store(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type operator=(integral-type) volatile noexcept;
    constexpr integral-type operator=(integral-type) noexcept;
    integral-type load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr integral-type load(memory_order = memory_order::seq_cst) const noexcept;
    operator integral-type() const volatile noexcept;
    constexpr operator integral-type() const noexcept;
    integral-type exchange(integral-type,
                           memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type exchange(integral-type,
                           memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type,
                               memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type,
                               memory_order, memory_order) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type,
                                 memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type,
                                 memory_order, memory_order) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type,
                               memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type,
                                 memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_add(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_add(integral-type,
                            memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_sub(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_sub(integral-type,
                            memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_and(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_and(integral-type,
                            memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_or(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_or(integral-type,
                            memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_xor(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_xor(integral-type,
                            memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_max(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_max(integral-type,
                            memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_min(integral-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_min(integral-type,
                            memory_order = memory_order::seq_cst) noexcept;
    void store_add(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_sub(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_and(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_and(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_or(integral-type,
                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_or(integral-type,
                            memory_order = memory_order::seq_cst) noexcept;
    void store_xor(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_xor(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_max(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_min(integral-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(integral-type,
                             memory_order = memory_order::seq_cst) noexcept;
    integral-type operator++(int) volatile noexcept;
    constexpr integral-type operator++(int) noexcept;
    integral-type operator--(int) volatile noexcept;
    constexpr integral-type operator--(int) noexcept;
    integral-type operator++() volatile noexcept;
    constexpr integral-type operator++() noexcept;
    integral-type operator--() volatile noexcept;
    constexpr integral-type operator--() noexcept;
    integral-type operator+=(integral-type) volatile noexcept;
    constexpr integral-type operator+=(integral-type) noexcept;
    integral-type operator-=(integral-type) volatile noexcept;
    constexpr integral-type operator-=(integral-type) noexcept;
    integral-type operator&=(integral-type) volatile noexcept;
    constexpr integral-type operator&=(integral-type) noexcept;
    integral-type operator|=(integral-type) volatile noexcept;
    constexpr integral-type operator|=(integral-type) noexcept;
    integral-type operator^=(integral-type) volatile noexcept;
    constexpr integral-type operator^=(integral-type) noexcept;
    void wait(integral-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
The atomic integral specializations are standard-layout structs. They each have a trivial destructor. Descriptions are provided below only for members that differ from the primary template. The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 155.
integral-type fetch_key(integral-type operand,
                        memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr integral-type fetch_key(integral-type operand,
                                   memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order.
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: Except for fetch_max and fetch_min, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
[Note 2:
There are no undefined results arising from the computation. – end note]
For fetch_max and fetch_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
void store_key(integral-type operand,
               memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(integral-type operand,
                         memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order.
Remarks: Except for store_max and store_min, for signed integer types, the result is as if the value pointed to by this and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
[Note 3:
There are no undefined results arising from the computation. – end note]
For store_max and store_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the value pointed to by this and the first parameter as the arguments.
integral-type operator op=(integral-type operand) volatile noexcept;
constexpr integral-type operator op=(integral-type operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
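[Informal example, not part of the normative wording: fetch_add returns the prior value, whereas the void-returning store_add described above performs the same read-modify-write and discards the result, which suffices for a plain statistics counter; the names are illustrative only.
#include <atomic>
#include <cstddef>

std::atomic<unsigned long long> bytes_sent{0};

void on_send(std::size_t n) {
  bytes_sent.fetch_add(n, std::memory_order::relaxed);   // previous total is returned (ignored here)
  // equivalently, without producing a return value:
  // bytes_sent.store_add(n, std::memory_order::relaxed);
}
– end informal example]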
There are specializations of the atomic class template for all cv-unqualified floating-point types. For each such type floating-point-type, the specialization atomic<floating-point-type> provides additional atomic operations appropriate to floating-point types.
namespace std {
  template<> struct atomic<floating-point-type> {
    using value_type = floating-point-type;
    using difference_type = value_type;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;
    constexpr atomic() noexcept;
    constexpr atomic(floating-point-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;
    void store(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type operator=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator=(floating-point-type) noexcept;
    floating-point-type load(memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type load(memory_order = memory_order::seq_cst) noexcept;
    operator floating-point-type() volatile noexcept;
    constexpr operator floating-point-type() noexcept;
    floating-point-type exchange(floating-point-type,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type exchange(floating-point-type,
                                 memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type,
                               memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type,
                               memory_order, memory_order) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                 memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                 memory_order, memory_order) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type,
                               memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type,
                                 memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_add(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_add(floating-point-type,
                                  memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_sub(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_sub(floating-point-type,
                                  memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_max(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_max(floating-point-type,
                                  memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_min(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_min(floating-point-type,
                                  memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_fmaximum(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_fmaximum(floating-point-type,
                                  memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_fminimum(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_fminimum(floating-point-type,
                                  memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_fmaximum_num(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_fmaximum_num(floating-point-type,
                                  memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_fminimum_num(floating-point-type,
                                  memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_fminimum_num(floating-point-type,
                                  memory_order = memory_order::seq_cst) noexcept;
    void store_add(floating-point-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(floating-point-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_sub(floating-point-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(floating-point-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_max(floating-point-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(floating-point-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_min(floating-point-type,
                   memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(floating-point-type,
                             memory_order = memory_order::seq_cst) noexcept;
    void store_fmaximum(floating-point-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fmaximum(floating-point-type,
                            memory_order = memory_order::seq_cst) noexcept;
    void store_fminimum(floating-point-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fminimum(floating-point-type,
                            memory_order = memory_order::seq_cst) noexcept;
    void store_fmaximum_num(floating-point-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fmaximum_num(floating-point-type,
                            memory_order = memory_order::seq_cst) noexcept;
    void store_fminimum_num(floating-point-type,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fminimum_num(floating-point-type,
                            memory_order = memory_order::seq_cst) noexcept;
    floating-point-type operator+=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator+=(floating-point-type) noexcept;
    floating-point-type operator-=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator-=(floating-point-type) noexcept;
    void wait(floating-point-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(floating-point-type,
                        memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
 The atomic floating-point specializations are standard-layout structs. They each have a trivial destructor. Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic addition and subtraction computations. The correspondence among key, operator, and computation is specified in Table 155, except for the keys max, min, fmaximum, fminimum, fmaximum_num, and fminimum_num, which are specified below.
floating-point-type fetch_key(floating-point-type operand,
                              memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr floating-point-type fetch_key(floating-point-type operand,
                                        memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order.
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point-type should conform to the std::numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
- For fetch_fmaximum and fetch_fminimum, the maximum and minimum computation is performed as if by fmaximum and fminimum, respectively, with the value pointed to by this and the first parameter as the arguments.
- For fetch_fmaximum_num and fetch_fminimum_num, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with the value pointed to by this and the first parameter as the arguments.
- For fetch_max and fetch_min, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with the value pointed to by this and the first parameter as the arguments, except that:
  - If both arguments are NaN, an unspecified NaN value replaces the value pointed to by this.
  - If exactly one argument is a NaN, either the other argument or an unspecified NaN value replaces the value pointed to by this; it is unspecified which.
  - If the arguments are differently signed zeros, which of these values replaces the value pointed to by this is unspecified.
Recommended practice: The implementation of fetch_max and fetch_min should treat negative zero as smaller than positive zero.
void store_key(floating-point-type operand,
               memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(floating-point-type operand,
                         memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order.
Remarks: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point-type should conform to the numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment. The arithmetic rules of floating-point atomic modify-write operations may be different from operations on floating-point types or atomic floating-point types.
[Note 1: Tree reductions are permitted for atomic modify-write operations. — end note]
- For store_fmaximum and store_fminimum, the maximum and minimum computation is performed as if by fmaximum and fminimum, respectively, with the value pointed to by this and the first parameter as the arguments.
- For store_fmaximum_num and store_fminimum_num, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with the value pointed to by this and the first parameter as the arguments.
- For store_max and store_min, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with the value pointed to by this and the first parameter as the arguments, except that:
  - If both arguments are NaN, an unspecified NaN value replaces the value pointed to by this.
  - If exactly one argument is a NaN, either the other argument or an unspecified NaN value replaces the value pointed to by this; it is unspecified which.
  - If the arguments are differently signed zeros, which of these values replaces the value pointed to by this is unspecified.
Recommended practice: The implementation of store_max and store_min should treat negative zero as smaller than positive zero.
floating-point-type operator op=(floating-point-type operand) volatile noexcept;
constexpr floating-point-type operator op=(floating-point-type operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
Remarks: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point-type should conform to the std::numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]). The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
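The following is an informal sketch, not part of the wording above: it shows atomic floating-point accumulation through fetch_add, which the specializations described here provide (the function and variable names are illustrative).

#include <atomic>
#include <vector>

std::atomic<double> sum{0.0};     // atomic floating-point specialization

void accumulate(const std::vector<double>& chunk) {
  double local = 0.0;
  for (double x : chunk)
    local += x;                                       // unsynchronized local work
  sum.fetch_add(local, std::memory_order_relaxed);    // one atomic addition per call
}

Because concurrent additions can be applied in any order, the rounded result can differ from a sequential summation even though each individual fetch_add is atomic.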
namespace std {
  template<class T> struct atomic<T*> {
    using value_type = T*;
    using difference_type = ptrdiff_t;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;
    constexpr atomic() noexcept;
    constexpr atomic(T*) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;
    void store(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T*, memory_order = memory_order::seq_cst) noexcept;
    T* operator=(T*) volatile noexcept;
    constexpr T* operator=(T*) noexcept;
    T* load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T* load(memory_order = memory_order::seq_cst) const noexcept;
    operator T*() const volatile noexcept;
    constexpr operator T*() const noexcept;
    T* exchange(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* exchange(T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T*&, T*,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*,
                               memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T*&, T*,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*,
                                 memory_order = memory_order::seq_cst) noexcept;
    T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_max(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_max(T*, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_min(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_min(T*, memory_order = memory_order::seq_cst) noexcept;
    void store_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    void store_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    void store_max(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(T*, memory_order = memory_order::seq_cst) noexcept;
    void store_min(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(T*, memory_order = memory_order::seq_cst) noexcept;
    T* operator++(int) volatile noexcept;
    constexpr T* operator++(int) noexcept;
    T* operator--(int) volatile noexcept;
    constexpr T* operator--(int) noexcept;
    T* operator++() volatile noexcept;
    constexpr T* operator++() noexcept;
    T* operator--() volatile noexcept;
    constexpr T* operator--() noexcept;
    T* operator+=(ptrdiff_t) volatile noexcept;
    constexpr T* operator+=(ptrdiff_t) noexcept;
    T* operator-=(ptrdiff_t) volatile noexcept;
    constexpr T* operator-=(ptrdiff_t) noexcept;
    void wait(T*, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
 There is a partial specialization of the atomic class template for pointers. Specializations of this partial specialization are standard-layout structs. They each have a trivial destructor. Descriptions are provided below only for members that differ from the primary template.
The following operations perform pointer arithmetic. The correspondence among key, operator, and computation is specified in Table 156.
T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Mandates: T is a complete object type.
[Note 1: Pointer arithmetic on void* or function pointers is ill-formed. — end note]
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order.
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior. For fetch_max and fetch_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
[Note 2: If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]). — end note]
void store_key(see above operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(see above operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Mandates: T is a complete object type.
[Note 3: Pointer arithmetic on void* or function pointers is ill-formed. — end note]
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior. For store_max and store_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the value pointed to by this and the first parameter as the arguments.
[Note 4: If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]). — end note]
T* operator op=(ptrdiff_t operand) volatile noexcept;
constexpr T* operator op=(ptrdiff_t operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
value_type operator++(int) volatile noexcept;
constexpr value_type operator++(int) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_add(1);
value_type operator--(int) volatile noexcept;
constexpr value_type operator--(int) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_sub(1);
value_type operator++() volatile noexcept;
constexpr value_type operator++() noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_add(1) + 1;
value_type operator--() volatile noexcept;
constexpr value_type operator--() noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_sub(1) - 1;
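As an informal illustration (not part of the wording above) of the pointer specialization, the sketch below hands out elements of an array to multiple threads using the load and compare_exchange_weak members from the synopsis; the names work_items, cursor, items_end, and claim are purely illustrative.

#include <atomic>

int work_items[100];
std::atomic<int*> cursor{work_items};
int* const items_end = work_items + 100;

// Returns a pointer to the next unclaimed item, or nullptr when none remain.
int* claim() {
  int* p = cursor.load(std::memory_order_relaxed);
  while (p != items_end &&
         !cursor.compare_exchange_weak(p, p + 1, std::memory_order_relaxed)) {
    // On failure, compare_exchange_weak reloads p with the current value of cursor.
  }
  return p == items_end ? nullptr : p;
}

A fetch_add(1) would be shorter, but the compare-and-exchange form never advances cursor past items_end, so the returned pointer always stays within the array.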
The library provides partial specializations of the atomic template for shared-ownership smart pointers ([util.sharedptr]).
[Note 1: The partial specializations are declared in header <memory>. — end note]
The template parameter T of these partial specializations may be an incomplete type.
All changes to an atomic smart pointer in [util.smartptr.atomic], and all associated use_count increments, are guaranteed to be performed atomically. Associated use_count decrements are sequenced after the atomic operation, but are not required to be part of it. Any associated deletion and deallocation are sequenced after the atomic update step and are not part of the atomic operation.
[Note 2: If the atomic operation uses locks, locks acquired by the implementation will be held when any use_count adjustments are performed, and will not be held when any destruction or deallocation resulting from this is performed. — end note]
[
Example 1: 
template<typename T> class atomic_list {
  struct node {
    T t;
    shared_ptr<node> next;
  };
  atomic<shared_ptr<node>> head;
public:
  shared_ptr<node> find(T t) const {
    auto p = head.load();
    while (p && p->t != t)
      p = p->next;
    return p;
  }
  void push_front(T t) {
    auto p = make_shared<node>();
    p->t = t;
    p->next = head;
    while (!head.compare_exchange_weak(p->next, p)) {}
  }
};
 β 
end example]
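A hedged companion to Example 1, not taken from the specification: a pop_front member for the same atomic_list sketch. It relies only on the load and compare_exchange_weak members shown in the synopsis below, and shared ownership keeps the removed node alive while other threads may still be traversing it.

shared_ptr<node> pop_front() {
  auto p = head.load();
  // Retry until head is moved from p to p->next; on failure, p is refreshed
  // with the current head, so the loop also ends when the list becomes empty.
  while (p && !head.compare_exchange_weak(p, p->next)) {}
  return p;
}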
namespace std {
  template<class T> struct atomic<shared_ptr<T>> {
    using value_type = shared_ptr<T>;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;
    constexpr atomic() noexcept;
    constexpr atomic(nullptr_t) noexcept : atomic() { }
    constexpr atomic(shared_ptr<T> desired) noexcept;
    atomic(const atomic&) = delete;
    void operator=(const atomic&) = delete;
    constexpr shared_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
    constexpr operator shared_ptr<T>() const noexcept;
    constexpr void store(shared_ptr<T> desired,
                         memory_order order = memory_order::seq_cst) noexcept;
    constexpr void operator=(shared_ptr<T> desired) noexcept;
    constexpr void operator=(nullptr_t) noexcept;
    constexpr shared_ptr<T> exchange(shared_ptr<T> desired,
                                     memory_order order = memory_order::seq_cst) noexcept;
    constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                                         memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                                           memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                                         memory_order order = memory_order::seq_cst) noexcept;
    constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                                           memory_order order = memory_order::seq_cst) noexcept;
    constexpr void wait(shared_ptr<T> old,
                        memory_order order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() noexcept;
    constexpr void notify_all() noexcept;
  private:
    shared_ptr<T> p;            // exposition only
  };
}
 constexpr atomic() noexcept;
Effects: Value-initializes p.
constexpr atomic(shared_ptr<T> desired) noexcept;
Effects: Initializes the object with the value desired.
[Note 1: It is possible to have an access to an atomic object A race with its construction, for example, by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. — end note]
constexpr void store(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the value of desired as if by p.swap(desired). Memory is affected according to the value of order.
constexpr void operator=(shared_ptr<T> desired) noexcept;
Effects: Equivalent to store(desired).
constexpr void operator=(nullptr_t) noexcept;
Effects: Equivalent to store(nullptr).
constexpr shared_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns p.
constexpr operator shared_ptr<T>() const noexcept;
Effects: Equivalent to: return load();
constexpr shared_ptr<T> exchange(shared_ptr<T> desired,
                                 memory_order order = memory_order::seq_cst) noexcept;
Effects: Atomically replaces p with desired as if by p.swap(desired). Memory is affected according to the value of order.
Returns: Atomically returns the value of p immediately before the effects.
constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                                     memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                                       memory_order success, memory_order failure) noexcept;
Preconditions: failure is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: If p is equivalent to expected, assigns desired to p and has synchronization semantics corresponding to the value of success, otherwise assigns p to expected and has synchronization semantics corresponding to the value of failure.
Returns: true if p was equivalent to expected, false otherwise.
Remarks: Two shared_ptr objects are equivalent if they store the same pointer value and either share ownership or are both empty. The weak form may fail spuriously. If the operation returns true, expected is not accessed after the atomic update and the operation is an atomic read-modify-write operation ([intro.multithread]) on the memory pointed to by this. Otherwise, the operation is an atomic load operation on that memory, and expected is updated with the existing value read from the atomic object in the attempted atomic update. The use_count update corresponding to the write to expected is part of the atomic operation. The write to expected itself is not required to be part of the atomic operation.
constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                                     memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to:
return compare_exchange_weak(expected, desired, order, fail_order);
where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                                       memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to:
return compare_exchange_strong(expected, desired, order, fail_order);
where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr void wait(shared_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares it to old.
- If the two are not equivalent, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: Two shared_ptr objects are equivalent if they store the same pointer and either share ownership or are both empty.
constexpr void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
constexpr void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
namespace std {
  template<class T> struct atomic<weak_ptr<T>> {
    using value_type = weak_ptr<T>;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;
    constexpr atomic() noexcept;
    constexpr atomic(weak_ptr<T> desired) noexcept;
    atomic(const atomic&) = delete;
    void operator=(const atomic&) = delete;
    constexpr weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
    constexpr operator weak_ptr<T>() const noexcept;
    constexpr void store(weak_ptr<T> desired,
                         memory_order order = memory_order::seq_cst) noexcept;
    constexpr void operator=(weak_ptr<T> desired) noexcept;
    constexpr weak_ptr<T> exchange(weak_ptr<T> desired,
                                   memory_order order = memory_order::seq_cst) noexcept;
    constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                         memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                           memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                         memory_order order = memory_order::seq_cst) noexcept;
    constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                           memory_order order = memory_order::seq_cst) noexcept;
    constexpr void wait(weak_ptr<T> old,
                        memory_order order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() noexcept;
    constexpr void notify_all() noexcept;
  private:
    weak_ptr<T> p;              // exposition only
  };
}
 constexpr atomic() noexcept;
Effects: Value-initializes p.
constexpr atomic(weak_ptr<T> desired) noexcept;
Effects: Initializes the object with the value desired.
[Note 1: It is possible to have an access to an atomic object A race with its construction, for example, by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. — end note]
constexpr void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the value of desired as if by p.swap(desired). Memory is affected according to the value of order.
constexpr void operator=(weak_ptr<T> desired) noexcept;
Effects: Equivalent to store(desired).
constexpr weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns p.
constexpr operator weak_ptr<T>() const noexcept;
Effects: Equivalent to: return load();
constexpr weak_ptr<T> exchange(weak_ptr<T> desired,
                               memory_order order = memory_order::seq_cst) noexcept;
Effects: Atomically replaces p with desired as if by p.swap(desired). Memory is affected according to the value of order.
Returns: Atomically returns the value of p immediately before the effects.
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                     memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                       memory_order success, memory_order failure) noexcept;
Preconditions: failure is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: If p is equivalent to expected, assigns desired to p and has synchronization semantics corresponding to the value of success, otherwise assigns p to expected and has synchronization semantics corresponding to the value of failure.
Returns: true if p was equivalent to expected, false otherwise.
Remarks: Two weak_ptr objects are equivalent if they store the same pointer value and either share ownership or are both empty. The weak form may fail spuriously. If the operation returns true, expected is not accessed after the atomic update and the operation is an atomic read-modify-write operation ([intro.multithread]) on the memory pointed to by this. Otherwise, the operation is an atomic load operation on that memory, and expected is updated with the existing value read from the atomic object in the attempted atomic update. The use_count update corresponding to the write to expected is part of the atomic operation. The write to expected itself is not required to be part of the atomic operation.
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                     memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to:
return compare_exchange_weak(expected, desired, order, fail_order);
where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                       memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to:
return compare_exchange_strong(expected, desired, order, fail_order);
where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares it to old.
- If the two are not equivalent, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: Two weak_ptr objects are equivalent if they store the same pointer and either share ownership or are both empty.
constexpr void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
constexpr void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
A non-member function template whose name matches the pattern atomic_f or the pattern atomic_f_explicit invokes the member function f, with the value of the first parameter as the object expression and the values of the remaining parameters (if any) as the arguments of the member function call, in order. An argument for a parameter of type atomic<T>::value_type* is dereferenced when passed to the member function call. If no such member function exists, the program is ill-formed.
[Note 1: The non-member functions enable programmers to write code that can be compiled as either C or C++, for example in a shared header file. — end note]
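For illustration only (these calls are ordinary library usage, not additional requirements), each non-member call below invokes the correspondingly named member function on the atomic object passed as its first argument:

#include <atomic>

std::atomic<int> counter{0};

void demo() {
  std::atomic_store(&counter, 5);            // invokes counter.store(5)
  std::atomic_fetch_add_explicit(&counter, 1,
      std::memory_order_relaxed);            // invokes counter.fetch_add(1, memory_order::relaxed)
  int v = std::atomic_load(&counter);        // invokes counter.load()
  (void)v;
}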
namespace std {
  struct atomic_flag {
    constexpr atomic_flag() noexcept;
    atomic_flag(const atomic_flag&) = delete;
    atomic_flag& operator=(const atomic_flag&) = delete;
    atomic_flag& operator=(const atomic_flag&) volatile = delete;
    bool test(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr bool test(memory_order = memory_order::seq_cst) const noexcept;
    bool test_and_set(memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool test_and_set(memory_order = memory_order::seq_cst) noexcept;
    void clear(memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void clear(memory_order = memory_order::seq_cst) noexcept;
    void wait(bool, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(bool, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
The atomic_flag type provides the classic test-and-set functionality. It has two states, set and clear. Operations on an object of type atomic_flag shall be lock-free. The operations should also be address-free.
The atomic_flag type is a standard-layout struct. It has a trivial destructor.
constexpr atomic_flag::atomic_flag() noexcept;
Effects: Initializes *this to the clear state.
bool atomic_flag_test(const volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test(const atomic_flag* object) noexcept;
bool atomic_flag_test_explicit(const volatile atomic_flag* object,
                               memory_order order) noexcept;
constexpr bool atomic_flag_test_explicit(const atomic_flag* object,
                               memory_order order) noexcept;
bool atomic_flag::test(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr bool atomic_flag::test(memory_order order = memory_order::seq_cst) const noexcept;
For atomic_flag_test, let order be memory_order::seq_cst.
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by object or this.
bool atomic_flag_test_and_set(volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test_and_set(atomic_flag* object) noexcept;
bool atomic_flag_test_and_set_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr bool atomic_flag_test_and_set_explicit(atomic_flag* object, memory_order order) noexcept;
bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) noexcept;
Effects: Atomically sets the value pointed to by object or by this to true. Memory is affected according to the value of order.
Returns: Atomically, the value of the object immediately before the effects.
void atomic_flag_clear(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_clear(atomic_flag* object) noexcept;
void atomic_flag_clear_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr void atomic_flag_clear_explicit(atomic_flag* object, memory_order order) noexcept;
void atomic_flag::clear(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void atomic_flag::clear(memory_order order = memory_order::seq_cst) noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically sets the value pointed to by object or by this to false. Memory is affected according to the value of order.
void atomic_flag_wait(const volatile atomic_flag* object, bool old) noexcept;
constexpr void atomic_flag_wait(const atomic_flag* object, bool old) noexcept;
void atomic_flag_wait_explicit(const volatile atomic_flag* object,
                               bool old, memory_order order) noexcept;
constexpr void atomic_flag_wait_explicit(const atomic_flag* object,
                               bool old, memory_order order) noexcept;
void atomic_flag::wait(bool old, memory_order order =
                                   memory_order::seq_cst) const volatile noexcept;
constexpr void atomic_flag::wait(bool old, memory_order order =
                                   memory_order::seq_cst) const noexcept;
For atomic_flag_wait, let order be memory_order::seq_cst. Let flag be object for the non-member functions and this for the member functions.
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates flag->test(order) != old.
- If the result of that evaluation is true, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
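For illustration only, a minimal spin lock built from these members; it assumes a C++20-or-later implementation in which a value-initialized atomic_flag is in the clear state and the wait and notify members are available (the class name spin_lock is illustrative):

#include <atomic>

class spin_lock {
  std::atomic_flag flag;                            // value-initialized to the clear state
public:
  void lock() {
    // test_and_set returns the previous state; true means another thread holds the lock.
    while (flag.test_and_set(std::memory_order_acquire))
      flag.wait(true, std::memory_order_relaxed);   // block while the flag remains set
  }
  void unlock() {
    flag.clear(std::memory_order_release);
    flag.notify_one();                              // wake one waiting lock() call, if any
  }
};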
void atomic_flag_notify_one(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_one(atomic_flag* object) noexcept;
void atomic_flag::notify_one() volatile noexcept;
constexpr void atomic_flag::notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
void atomic_flag_notify_all(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_all(atomic_flag* object) noexcept;
void atomic_flag::notify_all() volatile noexcept;
constexpr void atomic_flag::notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
#define ATOMIC_FLAG_INIT see below
Remarks: The macro ATOMIC_FLAG_INIT is defined in such a way that it can be used to initialize an object of type atomic_flag to the clear state. The macro can be used in the form:
atomic_flag guard = ATOMIC_FLAG_INIT;
It is unspecified whether the macro can be used in other initialization contexts. For a complete static-duration object, that initialization shall be static.
This subclause introduces synchronization primitives called fences. Fences can have acquire semantics, release semantics, or both.
A release fence A synchronizes with an acquire fence B if there exist atomic operations X and Y, where Y is not an atomic modify-write operation ([atomics.order]), both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
A release fence A synchronizes with an atomic operation B that performs an acquire operation on an atomic object M if there exists an atomic operation X such that A is sequenced before X, X modifies M, and B reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
An atomic operation A that is a release operation on an atomic object M synchronizes with an acquire fence B if there exists some atomic operation X on M such that X is sequenced before B and reads the value written by A or a value written by any side effect in the release sequence headed by A.
extern "C" constexpr void atomic_thread_fence(memory_order order) noexcept;
Effects: Depending on the value of order, this operation:
- has no effects, if order == memory_order::relaxed;
- is an acquire fence, if order == memory_order::acquire;
- is a release fence, if order == memory_order::release;
- is both an acquire fence and a release fence, if order == memory_order::acq_rel;
- is a sequentially consistent acquire and release fence, if order == memory_order::seq_cst.
 extern "C" constexpr void atomic_signal_fence(memory_order order) noexcept;
Effects: Equivalent to 
atomic_thread_fence(order), except that
the resulting ordering constraints are established only between a thread and a
signal handler executed in the same thread
. [
Note 1: 
atomic_signal_fence can be used to specify the order in which actions
performed by the thread become visible to the signal handler
.  Compiler optimizations and reorderings of loads and stores are inhibited in
the same way as with 
atomic_thread_fence, but the hardware fence instructions
that 
atomic_thread_fence would have inserted are not emitted
. β 
end note]
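The following sketch, not part of the wording above, illustrates the first synchronization rule for fences: a release fence sequenced before a relaxed store synchronizes with an acquire fence sequenced after a relaxed load that reads that store (the names producer, consumer, data, and ready are illustrative):

#include <atomic>
#include <cassert>
#include <thread>

int data = 0;                        // ordinary, non-atomic payload
std::atomic<bool> ready{false};      // atomic object M

void producer() {
  data = 42;                                             // sequenced before the fence
  std::atomic_thread_fence(std::memory_order_release);   // release fence A
  ready.store(true, std::memory_order_relaxed);          // X: modifies M
}

void consumer() {
  while (!ready.load(std::memory_order_relaxed)) {}      // Y: reads the value written by X
  std::atomic_thread_fence(std::memory_order_acquire);   // acquire fence B
  assert(data == 42);      // A synchronizes with B, so the write to data is visible
}

int main() {
  std::thread t1(producer), t2(consumer);
  t1.join();
  t2.join();
}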
The header <stdatomic.h> provides the following definitions:
Each using-declaration for some name A in the synopsis above makes available the same entity as std::A declared in <atomic>. Each macro listed above other than _Atomic(T) is defined as in <atomic>. It is unspecified whether <stdatomic.h> makes available any declarations in namespace std.
Neither the _Atomic macro, nor any of the non-macro global namespace declarations, are provided by any C++ standard library header other than <stdatomic.h>.
Recommended practice: Implementations should ensure that C and C++ representations of atomic objects are compatible, so that the same object can be accessed as both an _Atomic(T) from C code and an atomic<T> from C++ code. The representations should be the same, and the mechanisms used to ensure atomicity and memory ordering should be compatible.
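As an informal sketch of the compatibility this recommended practice enables (the file name and identifiers are hypothetical), a header intended to be included from both C and C++ translation units:

/* hit_counter.h -- hypothetical header shared between C and C++ code. */
#include <stdatomic.h>

/* _Atomic(int) is the C spelling; in C++ the _Atomic(T) macro provided by
   <stdatomic.h> names std::atomic<int>, with a compatible representation. */
typedef _Atomic(int) hit_counter;

static inline void record_hit(hit_counter* c) {
  atomic_fetch_add_explicit(c, 1, memory_order_relaxed);
}

static inline int read_hits(hit_counter* c) {
  return atomic_load_explicit(c, memory_order_acquire);
}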
Subclause [thread.mutex] provides mechanisms for mutual exclusion: mutexes, locks, and call once. A mutex object facilitates protection against data races and allows safe synchronization of data between execution agents. An execution agent owns a mutex from the time it successfully calls one of the lock functions until it calls unlock. Mutexes can be either recursive or non-recursive, and can grant simultaneous ownership to one or many execution agents. Both recursive and non-recursive mutexes are supplied.
The mutex types are the standard library types mutex, recursive_mutex, timed_mutex, recursive_timed_mutex, shared_mutex, and shared_timed_mutex. In this description, m denotes an object of a mutex type.
If initialization of an object of a mutex type fails, an exception of type system_error is thrown. The mutex types are neither copyable nor movable.
The error conditions for error codes, if any, reported by member functions of the mutex types are as follows:
- resource_unavailable_try_again — if any native handle type manipulated is not available.
- operation_not_permitted — if the thread does not have the privilege to perform the operation.
- invalid_argument — if any native handle type manipulated as part of mutex construction is incorrect.
The implementation provides lock and unlock operations, as described below. For purposes of determining the existence of a data race, these behave as atomic operations ([intro.multithread]). The lock and unlock operations on a single mutex appear to occur in a single total order.
[Note 3: Construction and destruction of an object of a mutex type need not be thread-safe; other synchronization can be used to ensure that mutex objects are initialized and visible to other threads. — end note]
The expression m.lock() is well-formed and has the following semantics:
Preconditions: If m is of type mutex, timed_mutex, shared_mutex, or shared_timed_mutex, the calling thread does not own the mutex.
Effects: Blocks the calling thread until ownership of the mutex can be obtained for the calling thread.
Postconditions: The calling thread owns the mutex.
Error conditions:
- operation_not_permitted — if the thread does not have the privilege to perform the operation.
- resource_deadlock_would_occur — if the implementation detects that a deadlock would occur.
The expression m.try_lock() is well-formed and has the following semantics:
Preconditions: If m is of type mutex, timed_mutex, shared_mutex, or shared_timed_mutex, the calling thread does not own the mutex.
Effects: Attempts to obtain ownership of the mutex for the calling thread without blocking. If ownership is not obtained, there is no effect and try_lock() immediately returns. An implementation may fail to obtain the lock even if it is not held by any other thread.
[Note 4: This spurious failure is normally uncommon, but allows interesting implementations based on a simple compare and exchange ([atomics]). — end note]
An implementation should ensure that try_lock() does not consistently return false in the absence of contending mutex acquisitions.
Synchronization: If try_lock() returns true, prior unlock() operations on the same object synchronize with this operation.
[Note 5: Since lock() does not synchronize with a failed subsequent try_lock(), the visibility rules are weak enough that little would be known about the state after a failure, even in the absence of spurious failures. — end note]
Returns: true if ownership was obtained, otherwise false.
The expression m.unlock() is well-formed and has the following semantics:
Preconditions: The calling thread owns the mutex.
Effects: Releases the calling thread's ownership of the mutex.
Synchronization: This operation synchronizes with subsequent lock operations that obtain ownership on the same object.
namespace std {
  class mutex {
  public:
    constexpr mutex() noexcept;
    ~mutex();
    mutex(const mutex&) = delete;
    mutex& operator=(const mutex&) = delete;
    void lock();
    bool try_lock();
    void unlock();
    using native_handle_type = implementation-defined;          
    native_handle_type native_handle();                         
  };
}
 The class mutex provides a non-recursive mutex with exclusive ownership semantics. If one thread owns a mutex object, attempts by another thread to acquire ownership of that object will fail (for try_lock()) or block (for lock()) until the owning thread has released ownership with a call to unlock().
[Note 1: After a thread A has called unlock(), releasing a mutex, it is possible for another thread B to lock the same mutex, observe that it is no longer in use, unlock it, and destroy it, before thread A appears to have returned from its unlock call. Conforming implementations handle such scenarios correctly, as long as thread A does not access the mutex after the unlock call returns. These cases typically occur when a reference-counted object contains a mutex that is used to protect the reference count. — end note]
[Note 2: A program can deadlock if the thread that owns a mutex object calls lock() on that object. If the implementation can detect the deadlock, a resource_deadlock_would_occur error condition might be observed. — end note]
The behavior of a program is undefined if it destroys a mutex object owned by any thread or a thread terminates while owning a mutex object.
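For illustration only, a typical use of class mutex through the RAII wrapper lock_guard, which calls lock() in its constructor and unlock() in its destructor, so ownership is released even if the protected code exits via an exception (the class counter is illustrative):

#include <mutex>

class counter {
  mutable std::mutex m;
  int value = 0;
public:
  void increment() {
    std::lock_guard<std::mutex> hold(m);   // blocks until ownership is obtained
    ++value;
  }                                        // unlock() runs here, in ~lock_guard
  int get() const {
    std::lock_guard<std::mutex> hold(m);
    return value;
  }
};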
namespace std {
  class recursive_mutex {
  public:
    recursive_mutex();
    ~recursive_mutex();
    recursive_mutex(const recursive_mutex&) = delete;
    recursive_mutex& operator=(const recursive_mutex&) = delete;
    void lock();
    bool try_lock() noexcept;
    void unlock();
    using native_handle_type = implementation-defined;          
    native_handle_type native_handle();                         
  };
}
 The class recursive_mutex provides a recursive mutex with exclusive ownership semantics. If one thread owns a recursive_mutex object, attempts by another thread to acquire ownership of that object will fail (for try_lock()) or block (for lock()) until the first thread has completely released ownership.
A thread that owns a recursive_mutex object may acquire additional levels of ownership by calling lock() or try_lock() on that object. It is unspecified how many levels of ownership may be acquired by a single thread. If a thread has already acquired the maximum level of ownership for a recursive_mutex object, additional calls to try_lock() fail, and additional calls to lock() throw an exception of type system_error. A thread shall call unlock() once for each level of ownership acquired by calls to lock() and try_lock(). Only when all levels of ownership have been released may ownership be acquired by another thread.
The behavior of a program is undefined if
- it destroys a recursive_mutex object owned by any thread or
- a thread terminates while owning a recursive_mutex object.
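For illustration only, a sketch of the recursive-ownership semantics described above: a member function that owns the mutex can call another member function that acquires it again, and each level acquired is matched by an unlock() when the corresponding lock_guard is destroyed (the class logger is illustrative):

#include <mutex>
#include <string>

class logger {
  std::recursive_mutex m;
  std::string last;
public:
  void log(const std::string& line) {
    std::lock_guard<std::recursive_mutex> hold(m);   // one more level of ownership
    last = line;
  }
  void log_twice(const std::string& line) {
    std::lock_guard<std::recursive_mutex> hold(m);   // level 1
    log(line);                                       // acquires and releases level 2
    log(line + " (again)");
  }
};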
The timed mutex types are the standard library types timed_mutex, recursive_timed_mutex, and shared_timed_mutex. They meet the requirements set out below. In this description, m denotes an object of a mutex type, rel_time denotes an object of an instantiation of duration, and abs_time denotes an object of an instantiation of time_point.
The expression m.try_lock_for(rel_time) is well-formed and has the following semantics:
Preconditions: If m is of type timed_mutex or shared_timed_mutex, the calling thread does not own the mutex.
Effects: The function attempts to obtain ownership of the mutex within the relative timeout ([thread.req.timing]) specified by rel_time. If the time specified by rel_time is less than or equal to rel_time.zero(), the function attempts to obtain ownership without blocking (as if by calling try_lock()). The function returns within the timeout specified by rel_time only if it has obtained ownership of the mutex object.
[Note 2: As with try_lock(), there is no guarantee that ownership will be obtained if the lock is available, but implementations are expected to make a strong effort to do so. — end note]
Returns: true if ownership was obtained, otherwise false.
The expression m.try_lock_until(abs_time) is well-formed and has the following semantics:
Preconditions: If m is of type timed_mutex or shared_timed_mutex, the calling thread does not own the mutex.
Effects: The function attempts to obtain ownership of the mutex. If abs_time has already passed, the function attempts to obtain ownership without blocking (as if by calling try_lock()). The function returns before the absolute timeout ([thread.req.timing]) specified by abs_time only if it has obtained ownership of the mutex object.
[Note 3: As with try_lock(), there is no guarantee that ownership will be obtained if the lock is available, but implementations are expected to make a strong effort to do so. — end note]
Returns: true if ownership was obtained, otherwise false.
namespace std {
  class timed_mutex {
  public:
    timed_mutex();
    ~timed_mutex();
    timed_mutex(const timed_mutex&) = delete;
    timed_mutex& operator=(const timed_mutex&) = delete;
    void lock();    
    bool try_lock();
    template<class Rep, class Period>
      bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template<class Clock, class Duration>
      bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock();
    using native_handle_type = implementation-defined;          
    native_handle_type native_handle();                         
  };
}
 The class timed_mutex provides a non-recursive mutex with exclusive ownership semantics. If one thread owns a timed_mutex object, attempts by another thread to acquire ownership of that object will fail (for try_lock()) or block (for lock(), try_lock_for(), and try_lock_until()) until the owning thread has released ownership with a call to unlock() or the call to try_lock_for() or try_lock_until() times out (having failed to obtain ownership).
The behavior of a program is undefined if
- it destroys a timed_mutex object owned by any thread,
- a thread that owns a timed_mutex object calls lock(), try_lock(), try_lock_for(), or try_lock_until() on that object, or
- a thread terminates while owning a timed_mutex object.
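For illustration only, a use of try_lock_for with a relative timeout; per the semantics above, a true return means ownership was obtained, and otherwise the call returns within roughly the requested duration plus the delays described in [thread.req.timing] (the names are illustrative):

#include <chrono>
#include <mutex>

std::timed_mutex config_mutex;

bool try_update() {
  using namespace std::chrono_literals;
  if (!config_mutex.try_lock_for(50ms))   // give up instead of blocking indefinitely
    return false;                         // ownership was not obtained within the timeout
  // ... modify the data protected by config_mutex ...
  config_mutex.unlock();
  return true;
}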
namespace std {
  class recursive_timed_mutex {
  public:
    recursive_timed_mutex();
    ~recursive_timed_mutex();
    recursive_timed_mutex(const recursive_timed_mutex&) = delete;
    recursive_timed_mutex& operator=(const recursive_timed_mutex&) = delete;
    void lock();    
    bool try_lock() noexcept;
    template<class Rep, class Period>
      bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template<class Clock, class Duration>
      bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock();
    using native_handle_type = implementation-defined;          
    native_handle_type native_handle();                         
  };
}
 The class recursive_timed_mutex provides a recursive mutex with exclusive ownership semantics. If one thread owns a recursive_timed_mutex object, attempts by another thread to acquire ownership of that object will fail (for try_lock()) or block (for lock(), try_lock_for(), and try_lock_until()) until the owning thread has completely released ownership or the call to try_lock_for() or try_lock_until() times out (having failed to obtain ownership).
A thread that owns a recursive_timed_mutex object may acquire additional levels of ownership by calling lock(), try_lock(), try_lock_for(), or try_lock_until() on that object. It is unspecified how many levels of ownership may be acquired by a single thread. If a thread has already acquired the maximum level of ownership for a recursive_timed_mutex object, additional calls to try_lock(), try_lock_for(), or try_lock_until() fail, and additional calls to lock() throw an exception of type system_error. A thread shall call unlock() once for each level of ownership acquired by calls to lock(), try_lock(), try_lock_for(), and try_lock_until(). Only when all levels of ownership have been released may ownership of the object be acquired by another thread.
The behavior of a program is undefined if
- it destroys a recursive_timed_mutex object owned by any thread, or
- a thread terminates while owning a recursive_timed_mutex object.
In this description, m denotes an object of a shared mutex type. Multiple execution agents can simultaneously hold shared lock ownership of a shared mutex type. But no execution agent holds a shared lock while another execution agent holds an exclusive lock on the same shared mutex type, and vice-versa. The maximum number of execution agents which can share a shared lock on a single shared mutex type is unspecified, but is at least 10000. If more than the maximum number of execution agents attempt to obtain a shared lock, the excess execution agents block until the number of shared locks is reduced below the maximum amount by other execution agents releasing their shared lock.
The expression m.lock_shared() is well-formed and has the following semantics:
Preconditions: The calling thread has no ownership of the mutex.
Effects: Blocks the calling thread until shared ownership of the mutex can be obtained for the calling thread. If an exception is thrown then a shared lock has not been acquired for the current thread.
Synchronization: Prior unlock() operations on the same object synchronize with ([intro.multithread]) this operation.
Postconditions: The calling thread has a shared lock on the mutex.
Error conditions:
- operation_not_permitted — if the thread does not have the privilege to perform the operation.
- resource_deadlock_would_occur — if the implementation detects that a deadlock would occur.
The expression m.unlock_shared() is well-formed and has the following semantics:
Preconditions: The calling thread holds a shared lock on the mutex.
Effects: Releases a shared lock on the mutex held by the calling thread.
Synchronization: This operation synchronizes with subsequent lock() operations that obtain ownership on the same object.
The expression m.try_lock_shared() is well-formed and has the following semantics:
Preconditions: The calling thread has no ownership of the mutex.
Effects: Attempts to obtain shared ownership of the mutex for the calling thread without blocking. If shared ownership is not obtained, there is no effect and try_lock_shared() immediately returns. An implementation may fail to obtain the lock even if it is not held by any other thread.
Synchronization: If try_lock_shared() returns true, prior unlock() operations on the same object synchronize with ([intro.multithread]) this operation.
Returns: true if the shared lock was acquired, otherwise false.
namespace std {
  class shared_mutex {
  public:
    shared_mutex();
    ~shared_mutex();
    shared_mutex(const shared_mutex&) = delete;
    shared_mutex& operator=(const shared_mutex&) = delete;
    
    void lock();                
    bool try_lock();
    void unlock();
    
    void lock_shared();         
    bool try_lock_shared();
    void unlock_shared();
    using native_handle_type = implementation-defined;          
    native_handle_type native_handle();                         
  };
}
 The class 
shared_mutex provides a non-recursive mutex
with shared ownership semantics
.The behavior of a program is undefined if
- it destroys a shared_mutex object owned by any thread,
- a thread attempts to recursively gain any ownership of a shared_mutex, or
- a thread terminates while possessing any ownership of a shared_mutex.
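An informal usage sketch (not normative; the settings class and its members are invented for illustration) of the intended reader/writer use of shared_mutex, with shared ownership for concurrent readers and exclusive ownership for a writer:

#include <map>
#include <shared_mutex>
#include <string>

class settings {
  mutable std::shared_mutex mtx;
  std::map<std::string, std::string> data;
public:
  // Readers acquire shared ownership; many readers may hold it concurrently.
  std::string get(const std::string& key) const {
    std::shared_lock<std::shared_mutex> lock(mtx);   // calls mtx.lock_shared()
    auto it = data.find(key);
    return it == data.end() ? std::string() : it->second;
  }
  // A writer acquires exclusive ownership, excluding all readers and writers.
  void set(const std::string& key, const std::string& value) {
    std::unique_lock<std::shared_mutex> lock(mtx);   // calls mtx.lock()
    data[key] = value;
  }
};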
shared_mutex may be a synonym for 
shared_timed_mutex.   In this description,
m denotes an object of a shared timed mutex type,
rel_time denotes an object of an instantiation of
duration (
[time.duration]), and
abs_time denotes an object of an instantiation of
time_point. The expression m.try_lock_shared_for(rel_time) is well-formed and
has the following semantics:
Preconditions: The calling thread has no ownership of the mutex
. Effects: Attempts to obtain
shared lock ownership for the calling thread within the relative
timeout (
[thread.req.timing]) specified by 
rel_time.  If the time
specified by 
rel_time is less than or equal to 
rel_time.zero(),
the function attempts to obtain ownership without blocking (as if by calling
try_lock_shared())
.The function returns within the timeout
specified by 
rel_time only if it has obtained shared ownership of the
mutex object
.[
Note 2: 
As with 
try_lock(), there is no guarantee that
ownership will be obtained if the lock is available, but implementations are
expected to make a strong effort to do so
. β 
end note]
If an exception is thrown then a shared lock has not been acquired for
the current thread
.Synchronization: If 
try_lock_shared_for() returns 
true, prior
unlock() operations on the same object synchronize
with (
[intro.multithread]) this operation
. Returns: 
true if the shared lock was acquired, otherwise 
false. The expression m.try_lock_shared_until(abs_time) is well-formed
and has the following semantics:
Preconditions: The calling thread has no ownership of the mutex
. Effects: The function attempts to obtain shared ownership of the mutex
.  If
abs_time has already passed, the function attempts to obtain shared
ownership without blocking (as if by calling 
try_lock_shared())
.The
function returns before the absolute timeout (
[thread.req.timing])
specified by 
abs_time only if it has obtained shared ownership of the
mutex object
.[
Note 3: 
As with 
try_lock(), there is no guarantee that
ownership will be obtained if the lock is available, but implementations are
expected to make a strong effort to do so
. β 
end note]
If an exception is thrown then a shared lock has not been acquired for
the current thread
.Synchronization: If 
try_lock_shared_until() returns 
true, prior
unlock() operations on the same object synchronize
with (
[intro.multithread]) this operation
. Returns: 
true if the shared lock was acquired, otherwise 
false. namespace std {
  class shared_timed_mutex {
  public:
    shared_timed_mutex();
    ~shared_timed_mutex();
    shared_timed_mutex(const shared_timed_mutex&) = delete;
    shared_timed_mutex& operator=(const shared_timed_mutex&) = delete;
    
    void lock();                
    bool try_lock();
    template<class Rep, class Period>
      bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template<class Clock, class Duration>
      bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock();
    
    void lock_shared();         
    bool try_lock_shared();
    template<class Rep, class Period>
      bool try_lock_shared_for(const chrono::duration<Rep, Period>& rel_time);
    template<class Clock, class Duration>
      bool try_lock_shared_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock_shared();
  };
}
 The class 
shared_timed_mutex provides a non-recursive mutex with shared
ownership semantics
.The behavior of a program is undefined if
- it destroys a shared_timed_mutex object owned by any thread,
- a thread attempts to recursively gain any ownership of a shared_timed_mutex, or
- a thread terminates while possessing any ownership of a shared_timed_mutex.
A 
lock is an object that holds a reference to a lockable object and may unlock the
lockable object during the lock's destruction (such as when leaving block scope)
.An execution
agent may use a lock to aid in managing ownership of a lockable object in an exception safe
manner
.A lock is said to 
own a lockable object if it is currently managing the
ownership of that lockable object for an execution agent
.A lock does not manage the lifetime
of the lockable object it references
.[
Note 1: 
Locks are intended to ease the burden of
unlocking the lockable object under both normal and exceptional circumstances
. β 
end note]
Some lock constructors take tag types which describe what should be done with the lockable
object during the lock's construction
.namespace std {
  struct defer_lock_t  { };     
  struct try_to_lock_t { };     
                                
  struct adopt_lock_t  { };     
                                
  inline constexpr defer_lock_t   defer_lock { };
  inline constexpr try_to_lock_t  try_to_lock { };
  inline constexpr adopt_lock_t   adopt_lock { };
}
 namespace std {
  template<class Mutex>
  class lock_guard {
  public:
    using mutex_type = Mutex;
    explicit lock_guard(mutex_type& m);
    lock_guard(mutex_type& m, adopt_lock_t);
    ~lock_guard();
    lock_guard(const lock_guard&) = delete;
    lock_guard& operator=(const lock_guard&) = delete;
  private:
    mutex_type& pm;             
  };
}
 An object of type 
lock_guard controls the ownership of a lockable object
within a scope
.A 
lock_guard object maintains ownership of a lockable
object throughout the 
lock_guard object's 
lifetime. The behavior of a program is undefined if the lockable object referenced by
pm does not exist for the entire lifetime of the lock_guard object.
explicit lock_guard(mutex_type& m);
Preconditions: If mutex_type is not a recursive mutex, the calling thread does not own the mutex m.
Effects: Initializes pm with m. Calls m.lock().
lock_guard(mutex_type& m, adopt_lock_t);
Preconditions: The calling thread holds a non-shared lock on m.
Effects: Initializes pm with m.
~lock_guard();
Effects: Equivalent to: pm.unlock()
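A brief informal sketch (not normative; counter, increment, and adopt_example are invented names) showing lock_guard acquiring a mutex for the duration of a block and adopting a lock that was taken manually:

#include <mutex>

std::mutex m;
int counter = 0;

void increment() {
  std::lock_guard<std::mutex> guard(m);  // locks m; unlocks in the destructor,
  ++counter;                             // even if the protected code throws
}

void adopt_example() {
  m.lock();                                               // lock taken manually...
  std::lock_guard<std::mutex> guard(m, std::adopt_lock);  // ...then adopted; no second lock()
  ++counter;
}                                        // guard's destructor calls m.unlock()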
namespace std {
  template<class... MutexTypes>
  class scoped_lock {
  public:
    using mutex_type = see below;     
    explicit scoped_lock(MutexTypes&... m);
    explicit scoped_lock(adopt_lock_t, MutexTypes&... m);
    ~scoped_lock();
    scoped_lock(const scoped_lock&) = delete;
    scoped_lock& operator=(const scoped_lock&) = delete;
  private:
    tuple<MutexTypes&...> pm;   
  };
}
 An object of type 
scoped_lock controls the ownership of lockable objects
within a scope
.A 
scoped_lock object maintains ownership of lockable
objects throughout the 
scoped_lock object's 
lifetime. The behavior of a program is undefined if the lockable objects referenced by
pm do not exist for the entire lifetime of the scoped_lock object.
If sizeof...(MutexTypes) is one, let Mutex denote the sole type constituting the pack MutexTypes. The member typedef-name mutex_type denotes the same type as Mutex.
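An informal usage sketch (not normative; the account type and transfer function are invented for illustration) showing scoped_lock acquiring two mutexes without deadlock and releasing both at the end of the block:

#include <mutex>

struct account {
  std::mutex m;
  long balance = 0;
};

void transfer(account& from, account& to, long amount) {
  // Locks both mutexes as if by std::lock, avoiding deadlock regardless of
  // the order in which concurrent transfers name the two accounts.
  std::scoped_lock lock(from.m, to.m);
  from.balance -= amount;
  to.balance += amount;
}                                        // both mutexes released here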
explicit scoped_lock(MutexTypes&... m);
Effects: Initializes pm with tie(m...). Then if sizeof...(MutexTypes) is 0, no effects. Otherwise if sizeof...(MutexTypes) is 1, then m.lock(). Otherwise, lock(m...).
explicit scoped_lock(adopt_lock_t, MutexTypes&... m);
Preconditions: The calling thread holds a non-shared lock on each element of m.
Effects: Initializes pm with tie(m...).
~scoped_lock();
Effects: For all i in [0, sizeof...(MutexTypes)), get<i>(pm).unlock().
namespace std {
  template<class Mutex>
  class unique_lock {
  public:
    using mutex_type = Mutex;
    
    unique_lock() noexcept;
    explicit unique_lock(mutex_type& m);
    unique_lock(mutex_type& m, defer_lock_t) noexcept;
    unique_lock(mutex_type& m, try_to_lock_t);
    unique_lock(mutex_type& m, adopt_lock_t);
    template<class Clock, class Duration>
      unique_lock(mutex_type& m, const chrono::time_point<Clock, Duration>& abs_time);
    template<class Rep, class Period>
      unique_lock(mutex_type& m, const chrono::duration<Rep, Period>& rel_time);
    ~unique_lock();
    unique_lock(const unique_lock&) = delete;
    unique_lock& operator=(const unique_lock&) = delete;
    unique_lock(unique_lock&& u) noexcept;
    unique_lock& operator=(unique_lock&& u) noexcept;
    
    void lock();
    bool try_lock();
    template<class Rep, class Period>
      bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template<class Clock, class Duration>
      bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock();
    
    void swap(unique_lock& u) noexcept;
    mutex_type* release() noexcept;
    
    bool owns_lock() const noexcept;
    explicit operator bool() const noexcept;
    mutex_type* mutex() const noexcept;
  private:
    mutex_type* pm;             
    bool owns;                  
  };
}
 An object of type 
unique_lock controls the ownership of a lockable
object within a scope
.Ownership of the lockable object may be acquired at
construction or after construction, and may be transferred, after
acquisition, to another 
unique_lock object
.Objects of type 
unique_lock are not
copyable but are movable
.The behavior of a program is undefined if the contained pointer
pm is not null and the lockable object pointed
to by 
pm does not exist for the entire remaining
lifetime ([basic.life]) of the unique_lock object.
unique_lock() noexcept;
Postconditions: pm == nullptr and owns == false.
explicit unique_lock(mutex_type& m);
Effects: Calls m.lock().
Postconditions: pm == addressof(m) and owns == true.
unique_lock(mutex_type& m, defer_lock_t) noexcept;
Postconditions: 
pm == addressof(m) and 
owns == false. unique_lock(mutex_type& m, try_to_lock_t);
Effects: Calls 
m.try_lock(). Postconditions: 
pm == addressof(m) and 
owns == res,
where 
res is the value returned by the call to 
m.try_lock(). unique_lock(mutex_type& m, adopt_lock_t);
Preconditions: The calling thread holds a non-shared lock on 
m. Postconditions: 
pm == addressof(m) and 
owns == true. template<class Clock, class Duration>
  unique_lock(mutex_type& m, const chrono::time_point<Clock, Duration>& abs_time);
Effects: Calls 
m.try_lock_until(abs_time). Postconditions: 
pm == addressof(m) and 
owns == res,
where 
res is
the value returned by the call to 
m.try_lock_until(abs_time). template<class Rep, class Period>
  unique_lock(mutex_type& m, const chrono::duration<Rep, Period>& rel_time);
Effects: Calls 
m.try_lock_for(rel_time). Postconditions: 
pm == addressof(m) and 
owns == res,
where 
res is the value returned by the call to 
m.try_lock_for(rel_time). unique_lock(unique_lock&& u) noexcept;
Postconditions: 
pm == u_p.pm and 
owns == u_p.owns (where 
u_p is the state of 
u just prior to this construction),  
u.pm == 0 and 
u.owns == false. unique_lock& operator=(unique_lock&& u) noexcept;
Effects: Equivalent to: unique_lock(std::move(u)).swap(*this)
~unique_lock();
Effects: If owns calls pm->unlock().
void lock();
Effects: As if by pm->lock().
Postconditions: owns == true.
Throws: Any exception thrown by pm->lock().
Error conditions:
- operation_not_permitted - if pm is nullptr.
- resource_deadlock_would_occur - if on entry owns is true.
bool try_lock();
Effects: As if by pm->try_lock().
Postconditions: owns == res, where res is the value returned by pm->try_lock().
Returns: The value returned by pm->try_lock().
Throws: Any exception thrown by pm->try_lock().
Error conditions:
- operation_not_permitted - if pm is nullptr.
- resource_deadlock_would_occur - if on entry owns is true.
 template<class Clock, class Duration>
  bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
Effects: As if by 
pm->try_lock_until(abs_time). Postconditions: 
owns == res, where 
res is the value returned by
pm->try_lock_until(abs_time). Returns: The value returned by 
pm->try_lock_until(abs_time). Throws: Any exception thrown by 
pm->try_lock_until(abs_time).
Error conditions:
- operation_not_permitted - if pm is nullptr.
- resource_deadlock_would_occur - if on entry owns is true.
 template<class Rep, class Period>
  bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
Effects: As if by 
pm->try_lock_for(rel_time). Postconditions: 
owns == res, where 
res is the value returned by 
pm->try_lock_for(rel_time). Returns: The value returned by 
pm->try_lock_for(rel_time). Throws: Any exception thrown by 
pm->try_lock_for(rel_time).  Error conditions: 
- operation_not_permitted - if pm is nullptr.
- resource_deadlock_would_occur - if on entry owns is true.
void unlock();
Effects: As if by pm->unlock().
Postconditions: owns == false.
Error conditions:
- operation_not_permitted - if on entry owns is false.
 void swap(unique_lock& u) noexcept;
Effects: Swaps the data members of 
*this and 
u. mutex_type* release() noexcept;
Postconditions: 
pm == 0 and 
owns == false. Returns: The previous value of 
pm. template<class Mutex>
  void swap(unique_lock<Mutex>& x, unique_lock<Mutex>& y) noexcept;
Effects: As if by 
x.swap(y).
bool owns_lock() const noexcept;
Returns: owns.
explicit operator bool() const noexcept;
Returns: owns.
mutex_type* mutex() const noexcept;
Returns: pm.
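A short informal sketch (not normative; m1, m2, and locked_work are invented names) showing deferred association of unique_lock objects, locking them together via std::lock, and transferring ownership by move:

#include <mutex>
#include <utility>

std::mutex m1, m2;

void locked_work() {
  std::unique_lock<std::mutex> l1(m1, std::defer_lock);  // associated, not yet locked
  std::unique_lock<std::mutex> l2(m2, std::defer_lock);
  std::lock(l1, l2);                  // locks both without deadlock
  // ... work under both locks ...
  std::unique_lock<std::mutex> moved = std::move(l1);    // ownership of m1 transfers to moved
}                                     // moved and l2 unlock in their destructors; l1 owns nothing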
namespace std {
  template<class Mutex>
  class shared_lock {
  public:
    using mutex_type = Mutex;
    
    shared_lock() noexcept;
    explicit shared_lock(mutex_type& m);        
    shared_lock(mutex_type& m, defer_lock_t) noexcept;
    shared_lock(mutex_type& m, try_to_lock_t);
    shared_lock(mutex_type& m, adopt_lock_t);
    template<class Clock, class Duration>
      shared_lock(mutex_type& m, const chrono::time_point<Clock, Duration>& abs_time);
    template<class Rep, class Period>
      shared_lock(mutex_type& m, const chrono::duration<Rep, Period>& rel_time);
    ~shared_lock();
    shared_lock(const shared_lock&) = delete;
    shared_lock& operator=(const shared_lock&) = delete;
    shared_lock(shared_lock&& u) noexcept;
    shared_lock& operator=(shared_lock&& u) noexcept;
    
    void lock();                                
    bool try_lock();
    template<class Rep, class Period>
      bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template<class Clock, class Duration>
      bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock();
    
    void swap(shared_lock& u) noexcept;
    mutex_type* release() noexcept;
    
    bool owns_lock() const noexcept;
    explicit operator bool() const noexcept;
    mutex_type* mutex() const noexcept;
  private:
    mutex_type* pm;                             
    bool owns;                                  
  };
}
 An object of type 
shared_lock controls the shared ownership of a
lockable object within a scope
.Shared ownership of the lockable object may be
acquired at construction or after construction, and may be transferred, after
acquisition, to another 
shared_lock object
.Objects of type
shared_lock are not copyable but are movable
.The behavior of a program
is undefined if the contained pointer 
pm is not null and the lockable
object pointed to by 
pm does not exist for the entire remaining
lifetime (
[basic.life]) of the 
shared_lock object
.Postconditions: 
pm == nullptr and 
owns == false. explicit shared_lock(mutex_type& m);
Effects: Calls 
m.lock_shared(). Postconditions: 
pm == addressof(m) and 
owns == true. shared_lock(mutex_type& m, defer_lock_t) noexcept;
Postconditions: 
pm == addressof(m) and 
owns == false. shared_lock(mutex_type& m, try_to_lock_t);
Effects: Calls 
m.try_lock_shared(). Postconditions: 
pm == addressof(m) and 
owns == res
where 
res is the
value returned by the call to 
m.try_lock_shared(). shared_lock(mutex_type& m, adopt_lock_t);
Preconditions: The calling thread holds a shared lock on 
m. Postconditions: 
pm == addressof(m) and 
owns == true. template<class Clock, class Duration>
  shared_lock(mutex_type& m,
              const chrono::time_point<Clock, Duration>& abs_time);
Effects: Calls 
m.try_lock_shared_until(abs_time). Postconditions: 
pm == addressof(m) and 
owns == res
where 
res
is the value returned by the call to 
m.try_lock_shared_until(abs_time). template<class Rep, class Period>
  shared_lock(mutex_type& m,
              const chrono::duration<Rep, Period>& rel_time);
Effects: Calls 
m.try_lock_shared_for(rel_time). Postconditions: 
pm == addressof(m) and 
owns == res
where 
res is
the value returned by the call to 
m.try_lock_shared_for(rel_time). Effects: If 
owns calls 
pm->unlock_shared(). shared_lock(shared_lock&& sl) noexcept;
Postconditions: 
pm == sl_p.pm and 
owns == sl_p.owns (where
sl_p is the state of 
sl just prior to this construction),
sl.pm == nullptr and 
sl.owns == false. shared_lock& operator=(shared_lock&& sl) noexcept;
Effects: Equivalent to: shared_lock(std::move(sl)).swap(*this)
void lock();
Effects: As if by pm->lock_shared().
Postconditions: owns == true.
Throws: Any exception thrown by pm->lock_shared().
Error conditions:
- operation_not_permitted - if pm is nullptr.
- resource_deadlock_would_occur - if on entry owns is true.
bool try_lock();
Effects: As if by pm->try_lock_shared().
Postconditions: owns == res, where res is the value returned by the call to pm->try_lock_shared().
Returns: The value returned by the call to pm->try_lock_shared().
Throws: Any exception thrown by pm->try_lock_shared().
Error conditions:
- operation_not_permitted - if pm is nullptr.
- resource_deadlock_would_occur - if on entry owns is true.
 template<class Clock, class Duration>
  bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
Effects: As if by 
pm->try_lock_shared_until(abs_time). Postconditions: 
owns == res, where 
res is the value returned by
the call to 
pm->try_lock_shared_until(abs_time). Returns: The value returned by the call to
pm->try_lock_shared_until(abs_time). Throws: Any exception thrown by 
pm->try_lock_shared_until(abs_time).  Error conditions: 
- operation_not_permitted - if pm is nullptr.
- resource_deadlock_would_occur - if on entry owns is true.
 template<class Rep, class Period>
  bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
Effects: As if by 
pm->try_lock_shared_for(rel_time). Postconditions: 
owns == res, where 
res is the value returned by the call to 
pm->try_lock_shared_for(rel_time). Returns: The value returned by the call to 
pm->try_lock_shared_for(rel_time). Throws: Any exception thrown by 
pm->try_lock_shared_for(rel_time).  Error conditions: 
- operation_not_permitted - if pm is nullptr.
- resource_deadlock_would_occur - if on entry owns is true.
void unlock();
Effects: As if by pm->unlock_shared().
Postconditions: owns == false.
Error conditions:
- operation_not_permitted - if on entry owns is false.
 void swap(shared_lock& sl) noexcept;
Effects: Swaps the data members of 
*this and 
sl. mutex_type* release() noexcept;
Postconditions: 
pm == nullptr and 
owns == false. Returns: The previous value of 
pm. template<class Mutex>
  void swap(shared_lock<Mutex>& x, shared_lock<Mutex>& y) noexcept;
Effects: As if by 
x.swap(y).
bool owns_lock() const noexcept;
Returns: owns.
explicit operator bool() const noexcept;
Returns: owns.
mutex_type* mutex() const noexcept;
Returns: pm.
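An informal sketch (not normative; the cache objects and read_first function are invented, and the 10-millisecond budget is chosen arbitrarily) showing a shared_lock constructed with a relative timeout, which attempts shared ownership as if by try_lock_shared_for:

#include <chrono>
#include <shared_mutex>
#include <vector>

std::shared_timed_mutex cache_mutex;
std::vector<int> cache;

bool read_first(int& out) {
  using namespace std::chrono_literals;
  // The constructor calls cache_mutex.try_lock_shared_for(10ms).
  std::shared_lock<std::shared_timed_mutex> lock(cache_mutex, 10ms);
  if (!lock.owns_lock() || cache.empty())
    return false;                      // timed out or nothing to read
  out = cache.front();
  return true;
}                                      // unlock_shared() runs only if the lock was obtained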
template<class L1, class L2, class... L3> int try_lock(L1&, L2&, L3&...);
Preconditions: Each template parameter type meets the 
Cpp17Lockable requirements
.  [
Note 1: 
The
unique_lock class template meets these requirements when suitably instantiated
. β 
end note]
Effects: Calls 
try_lock() for each argument in order beginning with the
first until all arguments have been processed or a call to 
try_lock() fails,
either by returning 
false or by throwing an exception
.  If a call to
try_lock() fails, 
unlock() is called for all prior arguments
with no further calls to 
try_lock().Returns: 
-1 if all calls to 
try_lock() returned 
true,
otherwise a zero-based index value that indicates the argument for which 
try_lock()
returned 
false. template<class L1, class L2, class... L3> void lock(L1&, L2&, L3&...);
Preconditions: Each template parameter type meets the 
Cpp17Lockable requirements
.  [
Note 2: 
The
unique_lock class template meets these requirements when suitably instantiated
. β 
end note]
Effects: All arguments are locked via a sequence of calls to 
lock(),
try_lock(), or 
unlock() on each argument
.  The sequence of calls does
not result in deadlock, but is otherwise unspecified
.[
Note 3: 
A deadlock avoidance
algorithm such as try-and-back-off can be used, but the specific algorithm is not
specified to avoid over-constraining implementations
. β 
end note]
If a call to
lock() or 
try_lock() throws an exception, 
unlock() is
called for any argument that had been locked by a call to 
lock() or
try_lock().namespace std {
  struct once_flag {
    constexpr once_flag() noexcept;
    once_flag(const once_flag&) = delete;
    once_flag& operator=(const once_flag&) = delete;
  };
}
 The class 
once_flag is an opaque data structure that 
call_once uses to
initialize data without causing a data race or deadlock
.constexpr once_flag() noexcept;
Synchronization: The construction of a 
once_flag object is not synchronized
. Postconditions: The object's internal state is set to indicate to an invocation of
call_once with the object as its initial argument that no function has been
called
. template<class Callable, class... Args>
  void call_once(once_flag& flag, Callable&& func, Args&&... args);
Mandates: 
is_invocable_v<Callable, Args...> is 
true. Effects: An execution of 
call_once that does not call its 
func is a
passive execution
.  An execution of 
call_once that calls its 
func
is an 
active execution
.An active execution evaluates
INVOKE(std::forward<Callable>(func),
std::forward<Args>(args)...) (
[func.require])
.An exceptional execution propagates the exception to the caller of
call_once.Among all executions of 
call_once for any given
once_flag: at most one is a returning execution; if there is a
returning execution, it is the last active execution; and there are
passive executions only if there is a returning execution
.[
Note 1: 
Passive
executions allow other threads to reliably observe the results produced by the
earlier returning execution
. β 
end note]
Synchronization: For any given 
once_flag: all active executions occur in a total
order; completion of an active execution 
synchronizes with
the start of the next one in this total order; and the returning execution
synchronizes with the return from all passive executions
. [
Example 1: 
void init();
std::once_flag flag;
void f() {
  std::call_once(flag, init);
}
struct initializer {
  void operator()();
};
void g() {
  static std::once_flag flag2;
  std::call_once(flag2, initializer());
}
class information {
  std::once_flag verified;
  void verifier();
public:
  void verify() { std::call_once(verified, &information::verifier, *this); }
};
 β 
end example]
Condition variables provide synchronization primitives used to block a thread until
notified by some other thread that some condition is met or until a system time is
reached
.Class 
condition_variable provides a condition variable that can only
wait on an object of type 
unique_lock<mutex>, allowing the implementation
to be more efficient
.Class 
condition_variable_any provides a general
condition variable that can wait on objects of user-supplied lock types
.Condition variables permit concurrent invocation of the 
wait, 
wait_for,
wait_until, 
notify_one and 
notify_all member functions
.The executions of 
notify_one and 
notify_all
are atomic
.The executions of 
wait, 
wait_for, and 
wait_until are performed
in three atomic parts:
1. the release of the mutex and entry into the waiting state;
2. the unblocking of the wait; and
3. the reacquisition of the lock.
The implementation behaves as if all executions of 
notify_one, 
notify_all, and each
part of the 
wait, 
wait_for, and 
wait_until executions are
executed in a single unspecified total order consistent with the βhappens beforeβ order
.Condition variable construction and destruction need not be synchronized
.void notify_all_at_thread_exit(condition_variable& cond, unique_lock<mutex> lk);
Preconditions: 
lk is locked by the calling thread and either
- no other thread is waiting on cond, or
- lk.mutex() returns the same value for each of the lock arguments
supplied by all concurrently waiting (via wait, wait_for,
or wait_until) threads.
 Effects: Transfers ownership of the lock associated with 
lk into
internal storage and schedules 
cond to be notified when the current
thread exits, after all objects with thread storage duration associated with
the current thread have been destroyed
.  This notification is equivalent to:
lk.unlock();
cond.notify_all();
Synchronization: The implied 
lk.unlock() call is sequenced after the destruction of
all objects with thread storage duration associated with the current thread
. [
Note 1: 
The supplied lock is held until the thread exits,
which might cause deadlock due to lock ordering issues
. β 
end note]
[
Note 2: 
It is the user's responsibility to ensure that waiting threads
do not incorrectly assume that the thread has finished if they experience
spurious wakeups
.This typically requires that the condition being waited
for is satisfied while holding the lock on 
lk, and that this lock
is not released and reacquired prior to calling 
notify_all_at_thread_exit. β 
end note]
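A brief informal sketch (not normative; the queue, produce, and consume names are invented for illustration) of the usual producer/consumer pattern with condition_variable, where the consumer waits with a predicate to tolerate spurious wakeups:

#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex m;
std::condition_variable cv;
std::queue<int> items;
bool done = false;

void produce(int value) {
  {
    std::lock_guard<std::mutex> lock(m);
    items.push(value);
  }                       // release the lock before notifying
  cv.notify_one();
}

bool consume(int& out) {
  std::unique_lock<std::mutex> lock(m);
  cv.wait(lock, [] { return !items.empty() || done; });  // predicate guards against spurious wakeups
  if (items.empty())
    return false;         // woken because done was set
  out = items.front();
  items.pop();
  return true;
}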
namespace std {
  class condition_variable {
  public:
    condition_variable();
    ~condition_variable();
    condition_variable(const condition_variable&) = delete;
    condition_variable& operator=(const condition_variable&) = delete;
    void notify_one() noexcept;
    void notify_all() noexcept;
    void wait(unique_lock<mutex>& lock);
    template<class Predicate>
      void wait(unique_lock<mutex>& lock, Predicate pred);
    template<class Clock, class Duration>
      cv_status wait_until(unique_lock<mutex>& lock,
                           const chrono::time_point<Clock, Duration>& abs_time);
    template<class Clock, class Duration, class Predicate>
      bool wait_until(unique_lock<mutex>& lock,
                      const chrono::time_point<Clock, Duration>& abs_time,
                      Predicate pred);
    template<class Rep, class Period>
      cv_status wait_for(unique_lock<mutex>& lock,
                         const chrono::duration<Rep, Period>& rel_time);
    template<class Rep, class Period, class Predicate>
      bool wait_for(unique_lock<mutex>& lock,
                    const chrono::duration<Rep, Period>& rel_time,
                    Predicate pred);
    using native_handle_type = implementation-defined;          
    native_handle_type native_handle();                         
  };
}
 The class 
condition_variable is a standard-layout class ([class.prop]).
condition_variable();
Error conditions:
- resource_unavailable_try_again - if some non-memory resource limitation prevents initialization.
~condition_variable();
Preconditions: There is no thread blocked on
*this.  [
Note 1: 
That is, all
threads have been notified; they can subsequently block on the lock specified in the
wait
.This relaxes the usual rules, which would have required all wait calls to happen before
destruction
.Only the notification to unblock the wait needs to happen before destruction
.Undefined behavior ensues if a thread waits on 
*this once the destructor has
been started, especially when the waiting threads are calling the wait functions in a loop or
using the overloads of 
wait, 
wait_for, or 
wait_until that take a predicate
. β 
end note]
void notify_one() noexcept;
Effects: If any threads are blocked waiting for 
*this, unblocks one of those threads
. void notify_all() noexcept;
Effects: Unblocks all threads that are blocked waiting for 
*this. void wait(unique_lock<mutex>& lock);
Preconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread, and either
- no other thread is waiting on this condition_variable object or
- lock.mutex() returns the same value for each of the lock
arguments supplied by all concurrently waiting (via wait,
wait_for, or wait_until) threads.
 Effects: 
- Atomically calls lock.unlock() and blocks on *this.
- When unblocked, calls lock.lock() (possibly blocking on the lock), then returns.
- The function will unblock when signaled by a call to notify_one() or a call to notify_all(), or spuriously.
 Postconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread
. Remarks: If the function fails to meet the postcondition, 
terminate()
is invoked (
[except.terminate])
.  [
Note 2: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Predicate>
  void wait(unique_lock<mutex>& lock, Predicate pred);
Preconditions: 
lock.owns_lock() is 
true and 
lock.mutex() is
locked by the calling thread, and either
- no other thread is waiting on this condition_variable object or
- lock.mutex() returns the same value for each of the lock
arguments supplied by all concurrently waiting (via wait,
wait_for, or wait_until) threads.
 Effects: Equivalent to:
while (!pred())
  wait(lock);
Postconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread
. Throws: Any exception thrown by 
pred. Remarks: If the function fails to meet the postcondition, 
terminate()
is invoked (
[except.terminate])
.  [
Note 3: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Clock, class Duration>
  cv_status wait_until(unique_lock<mutex>& lock,
                       const chrono::time_point<Clock, Duration>& abs_time);
Preconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread, and either
- no other thread is waiting on this condition_variable object or
- lock.mutex() returns the same value for each of the lock
arguments supplied by all concurrently waiting (via wait,
wait_for, or wait_until) threads.
 Effects: 
- Atomically calls lock.unlock() and blocks on *this.
- When unblocked, calls lock.lock() (possibly blocking on the lock), then returns.
- The function will unblock when signaled by a call to notify_one(), a call to notify_all(), expiration of the absolute timeout ([thread.req.timing]) specified by abs_time, or spuriously.
- If the function exits via an exception, lock.lock() is called prior to exiting the function.
 Postconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread
. Returns: 
cv_status::timeout if
the absolute timeout (
[thread.req.timing]) specified by 
abs_time expired,
otherwise 
cv_status::no_timeout. Remarks: If the function fails to meet the postcondition, 
terminate()
is invoked (
[except.terminate])
.  [
Note 4: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Rep, class Period>
  cv_status wait_for(unique_lock<mutex>& lock,
                     const chrono::duration<Rep, Period>& rel_time);
Preconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread, and either
- no other thread is waiting on this condition_variable object or
- lock.mutex() returns the same value for each of the lock arguments
supplied by all concurrently waiting (via wait, wait_for, or
wait_until) threads.
 Effects: Equivalent to:
return wait_until(lock, chrono::steady_clock::now() + rel_time);
Postconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread
. Returns: 
cv_status::timeout if
the relative timeout (
[thread.req.timing]) specified by 
rel_time expired,
otherwise 
cv_status::no_timeout. Remarks: If the function fails to meet the postcondition, 
terminate
is invoked (
[except.terminate])
.  [
Note 5: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Clock, class Duration, class Predicate>
  bool wait_until(unique_lock<mutex>& lock,
                  const chrono::time_point<Clock, Duration>& abs_time,
                  Predicate pred);
Preconditions: 
lock.owns_lock() is 
true and 
lock.mutex() is
locked by the calling thread, and either
- no other thread is waiting on this condition_variable object or
- lock.mutex() returns the same value for each of the lock
arguments supplied by all concurrently waiting (via wait,
wait_for, or wait_until) threads.
 Effects: Equivalent to:
while (!pred())
  if (wait_until(lock, abs_time) == cv_status::timeout)
    return pred();
return true;
Postconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread
. [
Note 6: 
The returned value indicates whether the predicate evaluated to
true regardless of whether the timeout was triggered
. β 
end note]
Remarks: If the function fails to meet the postcondition, 
terminate()
is invoked (
[except.terminate])
.  [
Note 7: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Rep, class Period, class Predicate>
  bool wait_for(unique_lock<mutex>& lock,
                const chrono::duration<Rep, Period>& rel_time,
                Predicate pred);
Preconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread, and either
- no other thread is waiting on this condition_variable object or
- lock.mutex() returns the same value for each of the lock arguments
supplied by all concurrently waiting (via wait, wait_for, or
wait_until) threads.
 Effects: Equivalent to:
return wait_until(lock, chrono::steady_clock::now() + rel_time, std::move(pred));
[
Note 8: 
There is no blocking if 
pred() is initially 
true, even if the
timeout has already expired
. β 
end note]
Postconditions: 
lock.owns_lock() is 
true and 
lock.mutex()
is locked by the calling thread
. [
Note 9: 
The returned value indicates whether the predicate evaluates to 
true
regardless of whether the timeout was triggered
. β 
end note]
Remarks: If the function fails to meet the postcondition, 
terminate()
is invoked (
[except.terminate])
.  [
Note 10: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
 [
Note 1: 
All of the standard
mutex types meet this requirement
.If a type other than one of the
standard mutex types or a 
unique_lock wrapper for a standard mutex type
is used with 
condition_variable_any, any
necessary synchronization is assumed to be in place with respect to the predicate associated
with the 
condition_variable_any instance
. β 
end note]
 namespace std {
  class condition_variable_any {
  public:
    condition_variable_any();
    ~condition_variable_any();
    condition_variable_any(const condition_variable_any&) = delete;
    condition_variable_any& operator=(const condition_variable_any&) = delete;
    void notify_one() noexcept;
    void notify_all() noexcept;
    
    template<class Lock>
      void wait(Lock& lock);
    template<class Lock, class Predicate>
      void wait(Lock& lock, Predicate pred);
    template<class Lock, class Clock, class Duration>
      cv_status wait_until(Lock& lock, const chrono::time_point<Clock, Duration>& abs_time);
    template<class Lock, class Clock, class Duration, class Predicate>
      bool wait_until(Lock& lock, const chrono::time_point<Clock, Duration>& abs_time,
                      Predicate pred);
    template<class Lock, class Rep, class Period>
      cv_status wait_for(Lock& lock, const chrono::duration<Rep, Period>& rel_time);
    template<class Lock, class Rep, class Period, class Predicate>
      bool wait_for(Lock& lock, const chrono::duration<Rep, Period>& rel_time, Predicate pred);
    
    template<class Lock, class Predicate>
      bool wait(Lock& lock, stop_token stoken, Predicate pred);
    template<class Lock, class Clock, class Duration, class Predicate>
      bool wait_until(Lock& lock, stop_token stoken,
                      const chrono::time_point<Clock, Duration>& abs_time, Predicate pred);
    template<class Lock, class Rep, class Period, class Predicate>
      bool wait_for(Lock& lock, stop_token stoken,
                    const chrono::duration<Rep, Period>& rel_time, Predicate pred);
  };
}
 condition_variable_any();
Error conditions: 
- resource_unavailable_try_again - if some non-memory resource limitation prevents initialization.
- operation_not_permitted - if the thread does not have the privilege to perform the operation.
 ~condition_variable_any();
Preconditions: There is no thread blocked on 
*this.  [
Note 2: 
That is, all
threads have been notified; they can subsequently block on the lock specified in the
wait
.This relaxes the usual rules, which would have required all wait calls to happen before
destruction
.Only the notification to unblock the wait needs to happen before destruction
.Undefined behavior ensues if a thread waits on 
*this once the destructor has
been started, especially when the waiting threads are calling the wait functions in a loop or
using the overloads of 
wait, 
wait_for, or 
wait_until that take a predicate
. β 
end note]
void notify_one() noexcept;
Effects: If any threads are blocked waiting for 
*this, unblocks one of those threads
. void notify_all() noexcept;
Effects: Unblocks all threads that are blocked waiting for 
*this. template<class Lock>
  void wait(Lock& lock);
Effects: 
- Atomically calls lock.unlock() and blocks on *this.
- When unblocked, calls lock.lock() (possibly blocking on the lock) and returns.
- The function will unblock when signaled by a call to notify_one(), a call to notify_all(), or spuriously.
 Postconditions: 
lock is locked by the calling thread
. Remarks: If the function fails to meet the postcondition, 
terminate()
is invoked (
[except.terminate])
.  [
Note 1: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Lock, class Predicate>
  void wait(Lock& lock, Predicate pred);
Effects: Equivalent to:
while (!pred())
  wait(lock);
template<class Lock, class Clock, class Duration>
  cv_status wait_until(Lock& lock, const chrono::time_point<Clock, Duration>& abs_time);
Effects: 
- Atomically calls lock.unlock() and blocks on *this.
- When unblocked, calls lock.lock() (possibly blocking on the lock) and returns.
- The function will unblock when signaled by a call to notify_one(), a call to notify_all(), expiration of the absolute timeout ([thread.req.timing]) specified by abs_time, or spuriously.
- If the function exits via an exception, lock.lock() is called prior to exiting the function.
 Postconditions: 
lock is locked by the calling thread
. Returns: 
cv_status::timeout if
the absolute timeout (
[thread.req.timing]) specified by 
abs_time expired,
otherwise 
cv_status::no_timeout. Remarks: If the function fails to meet the postcondition, 
terminate()
is invoked (
[except.terminate])
.  [
Note 2: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Lock, class Rep, class Period>
  cv_status wait_for(Lock& lock, const chrono::duration<Rep, Period>& rel_time);
Effects: Equivalent to:
return wait_until(lock, chrono::steady_clock::now() + rel_time);
Postconditions: 
lock is locked by the calling thread
. Returns: 
cv_status::timeout if
the relative timeout (
[thread.req.timing]) specified by 
rel_time expired,
otherwise 
cv_status::no_timeout. Remarks: If the function fails to meet the postcondition, 
terminate
is invoked (
[except.terminate])
.  [
Note 3: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Lock, class Clock, class Duration, class Predicate>
  bool wait_until(Lock& lock, const chrono::time_point<Clock, Duration>& abs_time, Predicate pred);
Effects: Equivalent to:
while (!pred())
  if (wait_until(lock, abs_time) == cv_status::timeout)
    return pred();
return true;
[
Note 4: 
There is no blocking if 
pred() is initially 
true, or
if the timeout has already expired
. β 
end note]
[
Note 5: 
The returned value indicates whether the predicate evaluates to 
true
regardless of whether the timeout was triggered
. β 
end note]
template<class Lock, class Rep, class Period, class Predicate>
  bool wait_for(Lock& lock, const chrono::duration<Rep, Period>& rel_time, Predicate pred);
Effects: Equivalent to:
return wait_until(lock, chrono::steady_clock::now() + rel_time, std::move(pred));
The following wait functions will be notified
when there is a stop request on the passed 
stop_token.In that case the functions return immediately,
returning 
false if the predicate evaluates to 
false.template<class Lock, class Predicate>
  bool wait(Lock& lock, stop_token stoken, Predicate pred);
Effects: Registers for the duration of this call *this
to get notified on a stop request on stoken
during this call and then equivalent to:
while (!stoken.stop_requested()) {
  if (pred())
    return true;
  wait(lock);
}
return pred();
[
Note 1: 
The returned value indicates whether the predicate evaluated to
true regardless of whether there was a stop request
. β 
end note]
Postconditions: 
lock is locked by the calling thread
. Throws: Any exception thrown by 
pred. Remarks: If the function fails to meet the postcondition,
terminate is called (
[except.terminate])
.  [
Note 2: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Lock, class Clock, class Duration, class Predicate>
  bool wait_until(Lock& lock, stop_token stoken,
                  const chrono::time_point<Clock, Duration>& abs_time, Predicate pred);
Effects: Registers for the duration of this call *this
to get notified on a stop request on stoken
during this call and then equivalent to:
while (!stoken.stop_requested()) {
  if (pred())
    return true;
  if (wait_until(lock, abs_time) == cv_status::timeout)
    return pred();
}
return pred();
[
Note 3: 
There is no blocking if 
pred() is initially 
true,
stoken.stop_requested() was already 
true
or the timeout has already expired
. β 
end note]
[
Note 4: 
The returned value indicates whether the predicate evaluated to 
true
regardless of whether the timeout was triggered or a stop request was made
. β 
end note]
Postconditions: 
lock is locked by the calling thread
. Remarks: If the function fails to meet the postcondition,
terminate is called (
[except.terminate])
.  [
Note 5: 
This can happen if the re-locking of the mutex throws an exception
. β 
end note]
template<class Lock, class Rep, class Period, class Predicate>
  bool wait_for(Lock& lock, stop_token stoken,
                const chrono::duration<Rep, Period>& rel_time, Predicate pred);
Effects: Equivalent to:
return wait_until(lock, std::move(stoken), chrono::steady_clock::now() + rel_time,
                  std::move(pred));
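An informal sketch (not normative; the work queue and worker function are invented for illustration) of the stop_token-aware wait on a condition_variable_any inside a std::jthread, which wakes either when work arrives or when a stop is requested:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <stop_token>
#include <thread>

std::mutex m;
std::condition_variable_any cv;
std::queue<int> work;

void worker(std::stop_token stoken) {
  std::unique_lock<std::mutex> lock(m);
  for (;;) {
    // Returns true when the predicate holds; returns false on a stop request
    // while the predicate is still false.
    if (!cv.wait(lock, stoken, [] { return !work.empty(); }))
      return;                        // stop requested and no work left
    int item = work.front();
    work.pop();
    lock.unlock();
    // ... process item without holding the lock ...
    (void)item;
    lock.lock();
  }
}

int main() {
  std::jthread t(worker);            // jthread passes its stop_token to worker
  {
    std::lock_guard<std::mutex> guard(m);
    work.push(42);
  }
  cv.notify_one();
}                                    // ~jthread requests stop; the waiting worker wakes and returns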
Semaphores are lightweight synchronization primitives
used to constrain concurrent access to a shared resource
.They are widely used to implement other synchronization primitives and,
whenever both are applicable, can be more efficient than condition variables
.A counting semaphore is a semaphore object
that models a non-negative resource count
.A binary semaphore is a semaphore object that has only two states
.A binary semaphore should be more efficient than
the default implementation of a counting semaphore with a unit resource count
.namespace std {
  template<ptrdiff_t least_max_value = implementation-defined>
  class counting_semaphore {
  public:
    static constexpr ptrdiff_t max() noexcept;
    constexpr explicit counting_semaphore(ptrdiff_t desired);
    ~counting_semaphore();
    counting_semaphore(const counting_semaphore&) = delete;
    counting_semaphore& operator=(const counting_semaphore&) = delete;
    void release(ptrdiff_t update = 1);
    void acquire();
    bool try_acquire() noexcept;
    template<class Rep, class Period>
      bool try_acquire_for(const chrono::duration<Rep, Period>& rel_time);
    template<class Clock, class Duration>
      bool try_acquire_until(const chrono::time_point<Clock, Duration>& abs_time);
  private:
    ptrdiff_t counter;          
  };
}
 Class template 
counting_semaphore maintains an internal counter
that is initialized when the semaphore is created
.The counter is decremented when a thread acquires the semaphore, and
is incremented when a thread releases the semaphore
.If a thread tries to acquire the semaphore when the counter is zero,
the thread will block
until another thread increments the counter by releasing the semaphore
.least_max_value shall be non-negative; otherwise the program is ill-formed
. Concurrent invocations of the member functions of 
counting_semaphore,
other than its destructor, do not introduce data races
.static constexpr ptrdiff_t max() noexcept;
Returns: The maximum value of 
counter.  This value is greater than or equal to 
least_max_value.constexpr explicit counting_semaphore(ptrdiff_t desired);
Preconditions: 
desired >= 0 is 
true, and
desired <= max() is 
true. Effects: Initializes 
counter with 
desired. void release(ptrdiff_t update = 1);
Preconditions: 
update >= 0 is 
true, and
update <= max() - counter is 
true. Effects: Atomically execute 
counter += update.  Then, unblocks any threads
that are waiting for 
counter to be greater than zero
.Synchronization: Strongly happens before invocations of 
try_acquire
that observe the result of the effects
. bool try_acquire() noexcept;
Effects: Attempts to atomically decrement 
counter if it is positive,
without blocking
.  If 
counter is not decremented, there is no effect and
try_acquire immediately returns
.An implementation may fail to decrement 
counter
even if it is positive
.[
Note 1: 
This spurious failure is normally uncommon, but
allows interesting implementations
based on a simple compare and exchange (
[atomics])
. β 
end note]
An implementation should ensure that 
try_acquire
does not consistently return 
false
in the absence of contending semaphore operations
.Returns: 
true if 
counter was decremented, otherwise 
false. Effects: Repeatedly performs the following steps, in order:
- Evaluates try_acquire(). If the result is true, returns.
- Blocks on *this until counter is greater than zero.
 template<class Rep, class Period>
  bool try_acquire_for(const chrono::duration<Rep, Period>& rel_time);
template<class Clock, class Duration>
  bool try_acquire_until(const chrono::time_point<Clock, Duration>& abs_time);
Effects: Repeatedly performs the following steps, in order:
- Evaluates try_acquire(). If the result is true, returns true.
- Blocks on *this until counter is greater than zero or until the timeout expires. If it is unblocked by the timeout expiring, returns false.
The timeout expires (
[thread.req.timing])
when the current time is after 
abs_time (for 
try_acquire_until)
or when at least 
rel_time has passed
from the start of the function (for 
try_acquire_for).
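An informal sketch (not normative; the names slots, ready, and use_resource are invented for illustration) showing a counting_semaphore limiting concurrency to four slots and a binary_semaphore used as a one-shot signal:

#include <semaphore>
#include <thread>

std::counting_semaphore<4> slots(4);     // at most 4 threads in the critical region
std::binary_semaphore ready(0);          // starts unavailable; used as a one-shot signal

void use_resource() {
  slots.acquire();                       // blocks while all 4 slots are taken
  // ... use the shared resource ...
  slots.release();                       // frees one slot, possibly unblocking a waiter
}

int main() {
  std::thread t([] {
    ready.acquire();                     // waits until main signals readiness
    use_resource();
  });
  ready.release();                       // counter becomes 1; unblocks t
  use_resource();
  t.join();
}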
Subclause
[thread.coord] describes various concepts related to thread coordination, and
defines the coordination types 
latch and 
barrier.These types facilitate concurrent computation performed by a number of threads
.A latch is a thread coordination mechanism
that allows any number of threads to block
until an expected number of threads arrive at the latch
(via the 
count_down function)
.The expected count is set when the latch is created
.An individual latch is a single-use object;
once the expected count has been reached, the latch cannot be reused
.namespace std {
  class latch {
  public:
    static constexpr ptrdiff_t max() noexcept;
    constexpr explicit latch(ptrdiff_t expected);
    ~latch();
    latch(const latch&) = delete;
    latch& operator=(const latch&) = delete;
    void count_down(ptrdiff_t update = 1);
    bool try_wait() const noexcept;
    void wait() const;
    void arrive_and_wait(ptrdiff_t update = 1);
  private:
    ptrdiff_t counter;  
  };
}
A 
latch maintains an internal counter
that is initialized when the latch is created
.Threads can block on the latch object,
waiting for counter to be decremented to zero
.Concurrent invocations of the member functions of 
latch,
other than its destructor, do not introduce data races
.static constexpr ptrdiff_t max() noexcept;
Returns: The maximum value of 
counter that the implementation supports
. constexpr explicit latch(ptrdiff_t expected);
Preconditions: 
expected >= 0 is 
true and
expected <= max() is 
true. Effects: Initializes 
counter with 
expected. void count_down(ptrdiff_t update = 1);
Preconditions: 
update >= 0 is 
true, and
update <= counter is 
true. Effects: Atomically decrements 
counter by 
update.  If 
counter is equal to zero,
unblocks all threads blocked on 
*this.Synchronization: Strongly happens before the returns from all calls that are unblocked
. bool try_wait() const noexcept;
Returns: With very low probability false; otherwise counter == 0.
void wait() const;
Effects: If counter equals zero, returns immediately. Otherwise, blocks on *this until a call to count_down that decrements counter to zero.
void arrive_and_wait(ptrdiff_t update = 1);
Effects: Equivalent to:
count_down(update);
wait();
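A brief informal sketch (not normative; workers, all_done, and pool are invented names) of a single-use latch on which the main thread waits until every worker has counted down once:

#include <latch>
#include <thread>
#include <vector>

int main() {
  constexpr int workers = 4;
  std::latch all_done(workers);               // expected count set at construction

  std::vector<std::thread> pool;
  for (int i = 0; i < workers; ++i)
    pool.emplace_back([&] {
      // ... perform this worker's share of the setup ...
      all_done.count_down();                  // decrement the counter exactly once
    });

  all_done.wait();                            // blocks until the counter reaches zero
  // ... all workers have finished their setup; the latch cannot be reused ...
  for (auto& t : pool) t.join();
}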
A barrier is a thread coordination mechanism
whose lifetime consists of a sequence of barrier phases,
where each phase allows at most an expected number of threads to block
until the expected number of threads arrive at the barrier
.[
Note 1: 
A barrier is useful for managing repeated tasks
that are handled by multiple threads
. β 
end note]
namespace std {
  template<class CompletionFunction = see below>
  class barrier {
  public:
    using arrival_token = see below;
    static constexpr ptrdiff_t max() noexcept;
    constexpr explicit barrier(ptrdiff_t expected,
                               CompletionFunction f = CompletionFunction());
    ~barrier();
    barrier(const barrier&) = delete;
    barrier& operator=(const barrier&) = delete;
    arrival_token arrive(ptrdiff_t update = 1);
    void wait(arrival_token&& arrival) const;
    void arrive_and_wait();
    void arrive_and_drop();
  private:
    CompletionFunction completion;      
  };
}
Each 
barrier phase consists of the following steps:
- The expected count is decremented by each call to arrive or arrive_and_drop.
- Exactly once after the expected count reaches zero, a thread executes the completion step during its call to arrive, arrive_and_drop, or wait, except that it is implementation-defined whether the step executes if no thread calls wait.
- When the completion step finishes, the expected count is reset to what was specified by the expected argument to the constructor, possibly adjusted by calls to arrive_and_drop, and the next phase starts.
 Threads that arrive at the barrier during the phase
can block on the phase synchronization point by calling 
wait, and
will remain blocked until the phase completion step is run
. The 
phase completion step
that is executed at the end of each phase has the following effects:
- Invokes the completion function, equivalent to completion().
- Unblocks all threads that are blocked on the phase synchronization point.
The end of the completion step strongly happens before
the returns from all calls that were unblocked by the completion step
.For specializations that do not have
the default value of the 
CompletionFunction template parameter,
the behavior is undefined if any of the barrier object's member functions
other than 
wait are called while the completion step is in progress
.Concurrent invocations of the member functions of 
barrier,
other than its destructor, do not introduce data races
.The member functions 
arrive and 
arrive_and_drop
execute atomically
. is_nothrow_invocable_v<CompletionFunction&> shall be 
true.  The default value of the 
CompletionFunction template parameter is
an unspecified type, such that,
in addition to satisfying the requirements of 
CompletionFunction,
it meets the 
Cpp17DefaultConstructible
requirements (Table 
30) and
completion() has no effects
.static constexpr ptrdiff_t max() noexcept;
Returns: The maximum expected count that the implementation supports
. constexpr explicit barrier(ptrdiff_t expected,
                           CompletionFunction f = CompletionFunction());
Preconditions: 
expected >= 0 is 
true and
expected <= max() is 
true. Effects: Sets both the initial expected count for each barrier phase and
the current expected count for the first phase to 
expected.  Initializes 
completion with 
std::move(f).[
Note 1: 
If 
expected is 0 this object can only be destroyed
. β 
end note]
Throws: Any exception thrown by 
CompletionFunction's move constructor
. arrival_token arrive(ptrdiff_t update = 1);
Preconditions: 
update > 0 is 
true, and
update is less than or equal to
the expected count for the current barrier phase
. Effects: Constructs an object of type 
arrival_token
that is associated with the phase synchronization point for the current phase
.  Then, decrements the expected count by 
update.Synchronization: The call to 
arrive strongly happens before
the start of the phase completion step for the current phase
. Returns: The constructed 
arrival_token object
. [
Note 2: 
This call can cause the completion step for the current phase to start
. β 
end note]
void wait(arrival_token&& arrival) const;
Preconditions: 
arrival is associated with
the phase synchronization point for the current phase or
the immediately preceding phase of the same barrier object
. Effects: Blocks at the synchronization point associated with 
std::move(arrival)
until the phase completion step of the synchronization point's phase is run
.  [
Note 3: 
If 
arrival is associated with the synchronization point
for a previous phase, the call returns immediately
. β 
end note]
void arrive_and_wait();
Effects: Equivalent to: wait(arrive()).
void arrive_and_drop();
Preconditions: The expected count for the current barrier phase is greater than zero
. Effects: Decrements the initial expected count for all subsequent phases by one
.  Then decrements the expected count for the current phase by one
.Synchronization: The call to 
arrive_and_drop strongly happens before
the start of the phase completion step for the current phase
. [
Note 4: 
This call can cause the completion step for the current phase to start
. β 
end note]
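An informal sketch (not normative; sync, on_phase_done, and pool are invented names) of a barrier coordinating two phases of work, with a noexcept completion function that runs exactly once per phase:

#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
  constexpr int workers = 3;
  auto on_phase_done = []() noexcept { std::puts("phase complete"); };
  std::barrier sync(workers, on_phase_done);   // expected count and completion function

  std::vector<std::thread> pool;
  for (int i = 0; i < workers; ++i)
    pool.emplace_back([&] {
      // ... phase 1 work ...
      sync.arrive_and_wait();                  // blocks until all workers arrive; then the
                                               // completion step runs and a new phase starts
      // ... phase 2 work ...
      sync.arrive_and_wait();
    });

  for (auto& t : pool) t.join();
}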
[futures] describes components that a C++ program can use to retrieve in one thread the
result (value or exception) from a function that has run in the same thread or another thread
.  [
Note 1: 
These components are not restricted to multi-threaded programs but can be useful in
single-threaded programs as well
. β 
end note]
The 
enum type 
launch is a bitmask type (
[bitmask.types]) with
elements 
launch::async and 
launch::deferred.[
Note 1: 
Implementations can provide bitmasks to specify restrictions on task
interaction by functions launched by 
async() applicable to a
corresponding subset of available launch policies
.Implementations can extend
the behavior of the first overload of 
async() by adding their extensions
to the launch policy under the βas ifβ rule
. β 
end note]
The enum values of 
future_errc are distinct and not zero
.const error_category& future_category() noexcept;
Returns: A reference to an object of a type derived from class 
error_category. The object's 
default_error_condition and 
equivalent virtual functions shall
behave as specified for the class 
error_category.The object's 
name
virtual function returns a pointer to the string 
"future".error_code make_error_code(future_errc e) noexcept;
Returns: 
error_code(static_cast<int>(e), future_category()). error_condition make_error_condition(future_errc e) noexcept;
Returns: 
error_condition(static_cast<int>(e), future_category()). namespace std {
  class future_error : public logic_error {
  public:
    explicit future_error(future_errc e);
    const error_code& code() const noexcept;
    const char*       what() const noexcept;
  private:
    error_code ec_;             
  };
}
explicit future_error(future_errc e);
Effects: Initializes ec_ with make_error_code(e).
const error_code& code() const noexcept;
const char* what() const noexcept;
Returns: An ntbs incorporating code().message().
Many of the classes introduced in subclause [futures] use some state to communicate results. This shared state consists of some state information and some (possibly not yet evaluated) result, which can be a (possibly void) value or an exception. [Note 1: Futures, promises, and tasks defined in this Clause reference such shared state. -- end note] [Note 2: The result can be any kind of object including a function to compute that result, as used by async when policy is launch::deferred. -- end note]
A waiting function of an asynchronous return object is one that potentially blocks to wait for the shared state to be made ready. The result of a shared state is set by respective functions on the asynchronous provider. [Example 1: Promises and tasks are examples of asynchronous providers. -- end example]
The means of setting the result of a shared state is specified in the description of those classes and functions that create such a state object
. When an asynchronous return object or an asynchronous provider is said to release its
shared state, it means:
- if the return object or provider holds the last reference to its shared state,
the shared state is destroyed; and
- the return object or provider gives up its reference to its shared state; and
- these actions will not block for the shared state to become ready, except that it
may block if all of the following are true: the shared state was created by a call to
std::async, the shared state is not yet ready, and this was the last reference
to the shared state.
When an asynchronous provider is said to make its shared state ready, it means:
- first, the provider marks its shared state as ready; and
- second, the provider unblocks any execution agents waiting for its shared
state to become ready.
When an asynchronous provider is said to abandon its shared state, it means:
- first, if that state is not ready, the provider
- stores an exception object of type future_error with an error condition of
broken_promise within its shared state; and then
- makes its shared state ready;
 
- second, the provider releases its shared state.
A shared state is ready only if it holds a value or an exception ready for retrieval. Waiting for a shared state to become ready may invoke code to compute the result on the waiting thread if so specified in the description of the class or function that creates the state object.
Calls to functions that successfully set the stored result of a shared state synchronize with calls to functions successfully detecting the ready state resulting from that setting. The storage of the result (whether normal or exceptional) into the shared state synchronizes with the successful return from a call to a waiting function on the shared state.
Some functions (e.g., promise::set_value_at_thread_exit) delay making the shared state ready until the calling thread exits. The destruction of each of that thread's objects with thread storage duration is sequenced before making that shared state ready.
Access to the result of the same shared state may conflict. [Note 3: This explicitly specifies that the result of the shared state is visible in the objects that reference this state in the sense of data race avoidance ([res.on.data.races]). For example, concurrent accesses through references returned by shared_future::get() ([futures.shared.future]) must either use read-only operations or provide additional synchronization. -- end note]
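A non-normative sketch of the synchronization guarantee just described: the store performed by set_value synchronizes with the successful return from get, so the consumer observes the produced value without additional locking. The producer/consumer split shown here is illustrative only.
#include <future>
#include <iostream>
#include <thread>

int main() {
  std::promise<int> p;                  // asynchronous provider
  std::future<int> f = p.get_future();  // asynchronous return object

  std::jthread producer([&p] {
    p.set_value(42);                    // stores the result and makes the state ready
  });

  // get() waits until the shared state is ready; the store above
  // synchronizes with this successful retrieval.
  std::cout << f.get() << '\n';
}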
namespace std {
  template<class R>
  class promise {
  public:
    promise();
    template<class Allocator>
      promise(allocator_arg_t, const Allocator& a);
    promise(promise&& rhs) noexcept;
    promise(const promise&) = delete;
    ~promise();
    
    promise& operator=(promise&& rhs) noexcept;
    promise& operator=(const promise&) = delete;
    void swap(promise& other) noexcept;
    
    future<R> get_future();
    
    void set_value(see below);
    void set_exception(exception_ptr p);
    
    void set_value_at_thread_exit(see below);
    void set_exception_at_thread_exit(exception_ptr p);
  };
}
For the primary template, R shall be an object type that meets the Cpp17Destructible requirements. The implementation provides the template promise and two specializations, promise<R&> and promise<void>. These differ only in the argument type of the member functions set_value and set_value_at_thread_exit, as set out in their descriptions, below.
The set_value, set_exception, set_value_at_thread_exit, and set_exception_at_thread_exit member functions behave as though they acquire a single mutex associated with the promise object while updating the promise object.
promise();
template<class Allocator>
  promise(allocator_arg_t, const Allocator& a);
Effects: Creates a shared state
.  The second
constructor uses the allocator 
a to allocate memory for the shared
state.
promise(promise&& rhs) noexcept;
Effects: Transfers ownership of the shared state of rhs (if any) to the newly-constructed object.
Postconditions: rhs has no shared state.
promise& operator=(promise&& rhs) noexcept;
Effects: Abandons any shared state ([futures.state]) and then as if promise(std::move(rhs)).swap(*this).
void swap(promise& other) noexcept;
Effects: Exchanges the shared state of *this and other.
Postconditions: *this has the shared state (if any) that other had prior to the call to swap. other has the shared state (if any) that *this had prior to the call to swap.
Synchronization: Calls to this function do not introduce data races ([intro.multithread]) with calls to set_value, set_exception, set_value_at_thread_exit, or set_exception_at_thread_exit. [Note 1: Such calls need not synchronize with each other. -- end note]
future<R> get_future();
Returns: A future<R> object with the same shared state as *this.
Throws: future_error if *this has no shared state or if get_future has already been called on a promise with the same shared state as *this.
Error conditions:
- future_already_retrieved if get_future has already been called on a promise with the same shared state as *this.
- no_state if *this has no shared state.
 void promise::set_value(const R& r);
void promise::set_value(R&& r);
void promise<R&>::set_value(R& r);
void promise<void>::set_value();
Effects: Atomically stores the value r in the shared state and makes that state ready ([futures.state]).
Throws:
- future_error if its shared state already has a stored value or exception, or
- for the first version, any exception thrown by the constructor selected to copy an object of R, or
- for the second version, any exception thrown by the constructor selected to move an object of R.
Error conditions:
- promise_already_satisfied if its shared state already has a stored value or exception.
- no_state if *this has no shared state.
 void set_exception(exception_ptr p);
Preconditions: p is not null.
Effects: Atomically stores the exception pointer p in the shared state and makes that state ready ([futures.state]).
Throws: future_error if its shared state already has a stored value or exception.
Error conditions:
- promise_already_satisfied if its shared state already has a stored value or exception.
- no_state if *this has no shared state.
 void promise::set_value_at_thread_exit(const R& r);
void promise::set_value_at_thread_exit(R&& r);
void promise<R&>::set_value_at_thread_exit(R& r);
void promise<void>::set_value_at_thread_exit();
Effects: Stores the value r in the shared state without making that state ready immediately. Schedules that state to be made ready when the current thread exits, after all objects with thread storage duration associated with the current thread have been destroyed.
Throws:
- future_error if its shared state already has a stored value or exception, or
- for the first version, any exception thrown by the constructor selected to copy an object of R, or
- for the second version, any exception thrown by the constructor selected to move an object of R.
Error conditions:
- promise_already_satisfied if its shared state already has a stored value or exception.
- no_state if *this has no shared state.
 void set_exception_at_thread_exit(exception_ptr p);
Preconditions: p is not null.
Effects: Stores the exception pointer p in the shared state without making that state ready immediately. Schedules that state to be made ready when the current thread exits, after all objects with thread storage duration associated with the current thread have been destroyed.
Throws: future_error if an error condition occurs.
Error conditions:
- promise_already_satisfied if its shared state already has a stored value or exception.
- no_state if *this has no shared state.
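A non-normative sketch of set_value_at_thread_exit: the value is stored immediately, but threads waiting on the future are unblocked only after the worker thread has exited and its thread-storage-duration objects have been destroyed.
#include <future>
#include <iostream>
#include <thread>

int main() {
  std::promise<int> p;
  std::future<int> f = p.get_future();

  std::thread worker([&p] {
    // The value is stored now, but the shared state becomes ready
    // only when this thread exits.
    p.set_value_at_thread_exit(7);
  });

  std::cout << f.get() << '\n';  // blocks until the worker thread has exited
  worker.join();
}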
 template<class R>
  void swap(promise<R>& x, promise<R>& y) noexcept;
Effects: As if by x.swap(y).
The class template future defines a type for asynchronous return objects which do not share their shared state with other asynchronous return objects. A default-constructed future object has no shared state. A future object with shared state can be created by functions on asynchronous providers or by the move constructor and shares its shared state with the original asynchronous provider. The result (value or exception) of a future object can be set by calling a respective function on an object that shares the same shared state. [Note 1: Member functions of future do not synchronize with themselves or with member functions of shared_future. -- end note]
The effect of calling any member function other than the destructor, the move assignment operator, share, or valid on a future object for which valid() == false is undefined. [Note 2: It is valid to move from a future object for which valid() == false. -- end note]
Recommended practice: Implementations should detect this case and throw an object of type future_error with an error condition of future_errc::no_state.
namespace std {
  template<class R>
  class future {
  public:
    future() noexcept;
    future(future&&) noexcept;
    future(const future&) = delete;
    ~future();
    future& operator=(const future&) = delete;
    future& operator=(future&&) noexcept;
    shared_future<R> share() noexcept;
    
    see below get();
    
    bool valid() const noexcept;
    void wait() const;
    template<class Rep, class Period>
      future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;
    template<class Clock, class Duration>
      future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;
  };
}
For the primary template, R shall be an object type that meets the Cpp17Destructible requirements. The implementation provides the template future and two specializations, future<R&> and future<void>. These differ only in the return type and return value of the member function get, as set out in its description, below.
future() noexcept;
Effects: The object does not refer to a shared state.
Postconditions: valid() == false.
future(future&& rhs) noexcept;
Effects: Move constructs a future object that refers to the shared state that was originally referred to by rhs (if any).
Postconditions:
- valid() returns the same value as rhs.valid() prior to the constructor invocation.
future& operator=(future&& rhs) noexcept;
Effects: If addressof(rhs) == this is true, there are no effects. Otherwise:
- move assigns the contents of rhs to *this.
Postconditions:
- valid() returns the same value as rhs.valid() prior to the assignment.
- If addressof(rhs) == this is false, rhs.valid() == false.
shared_future<R> share() noexcept;
Postconditions: valid() == false.
Returns: shared_future<R>(std::move(*this)).
R future::get();
R& future<R&>::get();
void future<void>::get();
[Note 3: As described above, the template and its two required specializations differ only in the return type and return value of the member function get. -- end note]
Effects:
- wait()s until the shared state is ready, then retrieves the value stored in the shared state;
- releases any shared state ([futures.state]).
Postconditions: valid() == false.
Returns:
- future::get() returns the value v stored in the object's shared state as std::move(v).
- future<R&>::get() returns the reference stored as value in the object's shared state.
- future<void>::get() returns nothing.
Throws: The stored exception, if an exception was stored in the shared state.
bool valid() const noexcept;
Returns: true only if *this refers to a shared state.
void wait() const;
Effects: Blocks until the shared state is ready.
template<class Rep, class Period>
  future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;
Effects: None if the shared state contains a deferred function (
[futures.async]),
otherwise
blocks until the shared state is ready or until
the relative timeout (
[thread.req.timing]) specified by 
rel_time has expired
.
Returns:
- future_status::deferred if the shared state contains a deferred function.
- future_status::ready if the shared state is ready.
- future_status::timeout if the function is returning because the relative timeout ([thread.req.timing]) specified by rel_time has expired.
 template<class Clock, class Duration>
  future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;
Effects: None if the shared state contains a deferred function (
[futures.async]),
otherwise
blocks until the shared state is ready or until
the absolute timeout (
[thread.req.timing]) specified by 
abs_time has expired
.
Returns:
- future_status::deferred if the shared state contains a deferred function.
- future_status::ready if the shared state is ready.
- future_status::timeout if the function is returning because the absolute timeout ([thread.req.timing]) specified by abs_time has expired.
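A non-normative sketch of polling a future with wait_for and distinguishing the three future_status results; the 10 ms interval and the lambda are arbitrary illustrations.
#include <chrono>
#include <future>
#include <iostream>

int main() {
  using namespace std::chrono_literals;
  std::future<int> f = std::async(std::launch::async | std::launch::deferred,
                                  [] { return 42; });

  switch (f.wait_for(10ms)) {
    case std::future_status::deferred:
      std::cout << "deferred: runs on get()\n"; break;
    case std::future_status::ready:
      std::cout << "ready\n"; break;
    case std::future_status::timeout:
      std::cout << "still running\n"; break;
  }
  std::cout << f.get() << '\n';
}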
The class template shared_future defines a type for asynchronous return objects which may share their shared state with other asynchronous return objects. A default-constructed shared_future object has no shared state. A shared_future object with shared state can be created by conversion from a future object and shares its shared state with the original asynchronous provider of the shared state. The result (value or exception) of a shared_future object can be set by calling a respective function on an object that shares the same shared state. [Note 1: Member functions of shared_future do not synchronize with themselves, but they synchronize with the shared state. -- end note]
The effect of calling any member function other than the destructor, the move assignment operator, the copy assignment operator, or valid() on a shared_future object for which valid() == false is undefined. [Note 2: It is valid to copy or move from a shared_future object for which valid() is false. -- end note]
Recommended practice: Implementations should detect this case and throw an object of type future_error with an error condition of future_errc::no_state.
namespace std {
  template<class R>
  class shared_future {
  public:
    shared_future() noexcept;
    shared_future(const shared_future& rhs) noexcept;
    shared_future(future<R>&&) noexcept;
    shared_future(shared_future&& rhs) noexcept;
    ~shared_future();
    shared_future& operator=(const shared_future& rhs) noexcept;
    shared_future& operator=(shared_future&& rhs) noexcept;
    
    see below get() const;
    
    bool valid() const noexcept;
    void wait() const;
    template<class Rep, class Period>
      future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;
    template<class Clock, class Duration>
      future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;
  };
}
For the primary template, R shall be an object type that meets the Cpp17Destructible requirements. The implementation provides the template shared_future and two specializations, shared_future<R&> and shared_future<void>. These differ only in the return type and return value of the member function get, as set out in its description, below.
shared_future() noexcept;
Effects: The object does not refer to a shared state.
Postconditions: valid() == false.
shared_future(const shared_future& rhs) noexcept;
Effects: The object refers to the same shared state as rhs (if any).
Postconditions: valid() returns the same value as rhs.valid().
shared_future(future<R>&& rhs) noexcept;
shared_future(shared_future&& rhs) noexcept;
Effects: Move constructs a shared_future object that refers to the shared state that was originally referred to by rhs (if any).
Postconditions:
- valid() returns the same value as rhs.valid() returned prior to the constructor invocation.
 shared_future& operator=(shared_future&& rhs) noexcept;
Effects: If addressof(rhs) == this is true, there are no effects. Otherwise:
- Releases any shared state ([futures.state]);
- move assigns the contents of rhs to *this.
Postconditions:
- valid() returns the same value as rhs.valid() returned prior to the assignment.
- If addressof(rhs) == this is false, rhs.valid() == false.
shared_future& operator=(const shared_future& rhs) noexcept;
Effects: If addressof(rhs) == this is true, there are no effects. Otherwise:
- Releases any shared state ([futures.state]);
- assigns the contents of rhs to *this. [Note 3: As a result, *this refers to the same shared state as rhs (if any). -- end note]
Postconditions: valid() == rhs.valid().
const R& shared_future::get() const;
R& shared_future<R&>::get() const;
void shared_future<void>::get() const;
[Note 4: As described above, the template and its two required specializations differ only in the return type and return value of the member function get. -- end note]
[Note 5: Access to a value object stored in the shared state is unsynchronized, so operations on R might introduce a data race ([intro.multithread]). -- end note]
Effects: wait()s until the shared state is ready, then retrieves the value stored in the shared state.
Returns:
- shared_future::get() returns a const reference to the value stored in the object's shared state. [Note 6: Access through that reference after the shared state has been destroyed produces undefined behavior; this can be avoided by not storing the reference in any storage with a greater lifetime than the shared_future object that returned the reference. -- end note]
- shared_future<R&>::get() returns the reference stored as value in the object's shared state.
- shared_future<void>::get() returns nothing.
Throws: The stored exception, if an exception was stored in the shared state.
bool valid() const noexcept;
Returns: true only if *this refers to a shared state.
void wait() const;
Effects: Blocks until the shared state is ready.
template<class Rep, class Period>
  future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;
Effects: None if the shared state contains a deferred function (
[futures.async]),
otherwise
blocks until the shared state is ready or until
the relative timeout (
[thread.req.timing]) specified by
rel_time has expired
.
Returns:
- future_status::deferred if the shared state contains a deferred function.
- future_status::ready if the shared state is ready.
- future_status::timeout if the function is returning because the relative timeout ([thread.req.timing]) specified by rel_time has expired.
 template<class Clock, class Duration>
  future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;
Effects: None if the shared state contains a deferred function (
[futures.async]),
otherwise
blocks until the shared state is ready or until the
absolute timeout (
[thread.req.timing]) specified by
abs_time has expired
.
Returns:
- future_status::deferred if the shared state contains a deferred function.
- future_status::ready if the shared state is ready.
- future_status::timeout if the function is returning because the absolute timeout ([thread.req.timing]) specified by abs_time has expired.
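A non-normative sketch of sharing one result among several consumers through copies of a shared_future; unlike future::get, shared_future::get can be called on each copy. The two consumers shown are illustrative only.
#include <future>
#include <iostream>
#include <thread>

int main() {
  std::promise<int> p;
  std::shared_future<int> sf = p.get_future().share();   // future -> shared_future

  auto consumer = [sf](const char* name) {                // each thread gets its own copy
    std::cout << name << " sees " << sf.get() << '\n';    // get() does not invalidate sf
  };

  std::jthread t1(consumer, "t1");
  std::jthread t2(consumer, "t2");
  p.set_value(99);
}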
The function template async provides a mechanism to launch a function potentially in a new thread and provides the result of the function in a future object with which it shares a shared state.
template<class F, class... Args>
  future<invoke_result_t<decay_t<F>, decay_t<Args>...>>
    async(F&& f, Args&&... args);
template<class F, class... Args>
  future<invoke_result_t<decay_t<F>, decay_t<Args>...>>
    async(launch policy, F&& f, Args&&... args);
Mandates: The following are all 
true:
- is_constructible_v<decay_t<F>, F>,
- (is_constructible_v<decay_t<Args>, Args> && ...), and
- is_invocable_v<decay_t<F>, decay_t<Args>...>.
 Effects: The first function
behaves the same as a call to the second function with a 
policy argument of
launch::async | launch::deferred
and the same arguments for 
F and 
Args.  The second function creates a shared state that is associated with
the returned 
future object
.The further behavior
of the second function depends on the 
policy argument as follows (if
more than one of these conditions applies, the implementation may choose any of
the corresponding policies):
- If launch::async is set in policy, calls invoke(auto(std::forward<F>(f)), auto(std::forward<Args>(args))...) ([func.invoke], [thread.thread.constr]) as if in a new thread of execution represented by a thread object with the values produced by auto being materialized ([conv.rval]) in the thread that called async.
  - Any return value is stored as the result in the shared state.
  - Any exception propagated from the execution of invoke(auto(std::forward<F>(f)), auto(std::forward<Args>(args))...) is stored as the exceptional result in the shared state.
  - The thread object is stored in the shared state and affects the behavior of any asynchronous return objects that reference that state.
- If launch::deferred is set in policy, stores auto(std::forward<F>(f)) and auto(std::forward<Args>(args))... in the shared state.
  - Invocation of the deferred function evaluates invoke(std::move(g), std::move(xyz)) where g is the stored value of auto(std::forward<F>(f)) and xyz is the stored copy of auto(std::forward<Args>(args))....
  - Any return value is stored as the result in the shared state.
  - Any exception propagated from the execution of the deferred function is stored as the exceptional result in the shared state.
  - The shared state is not made ready until the function has completed.
  - The first call to a non-timed waiting function ([futures.state]) on an asynchronous return object referring to this shared state invokes the deferred function in the thread that called the waiting function.
  - Once evaluation of invoke(std::move(g), std::move(xyz)) begins, the function is no longer considered deferred.
  Recommended practice: If this policy is specified together with other policies, such as when using a policy value of launch::async | launch::deferred, implementations should defer invocation or the selection of the policy when no more concurrency can be effectively exploited.
- If no value is set in the launch policy, or a value is set that is neither specified in this document nor by the implementation, the behavior is undefined.
Synchronization: The invocation of async synchronizes with the invocation of f. The completion of the function f is sequenced before the shared state is made ready. [Note 1: These apply regardless of the provided policy argument, and even if the corresponding future object is moved to another thread. However, it is possible for f not to be called at all, in which case its completion never happens. -- end note]
If the implementation chooses the 
launch::async policy,
- a call to a waiting function on an asynchronous return
object that shares the shared state created by this async call shall
block until the associated thread has completed, as if joined, or else time
out ([thread.thread.member]);
- the associated thread completion
synchronizes with
the return from
the first function
that successfully detects the ready status of the shared state or
with the return from the last
function that releases the shared state, whichever
happens first.
Returns: An object of type
future<invoke_result_t<decay_t<F>, decay_t<Args>...>> that refers
to the shared state created by this call to 
async.  [
Note 2: 
If a future obtained from 
async is moved outside the local scope,
the future's destructor can block for the shared state to become ready
. β 
end note]
Throws: system_error if policy == launch::async and the implementation is unable to start a new thread, or std::bad_alloc if memory for the internal data structures cannot be allocated.
Error conditions:
- resource_unavailable_try_again if policy == launch::async and the system is unable to start a new thread.
[Example 1:
int work1(int value);
int work2(int value);
int work(int value) {
  auto handle = std::async([=]{ return work2(value); });
  int tmp = work1(value);
  return tmp + handle.get();    // #1
}
[Note 3: Line #1 might not result in concurrency because the async call uses the default policy, which might use launch::deferred, in which case the lambda might not be invoked until the get() call; in that case, work1 and work2 are called on the same thread and there is no concurrency. -- end note]
-- end example]
The class template packaged_task defines a type for wrapping a function or callable object so that the return value of the function or callable object is stored in a future when it is invoked. When the packaged_task object is invoked, its stored task is invoked and the result (whether normal or exceptional) stored in the shared state. Any futures that share the shared state will then be able to access the stored result.
namespace std {
  template<class> class packaged_task;  
  template<class R, class... ArgTypes>
  class packaged_task<R(ArgTypes...)> {
  public:
    
    packaged_task() noexcept;
    template<class F>
      explicit packaged_task(F&& f);
    template<class F, class Allocator>
      explicit packaged_task(allocator_arg_t, const Allocator& a, F&& f);
    ~packaged_task();
    
    packaged_task(const packaged_task&) = delete;
    packaged_task& operator=(const packaged_task&) = delete;
    
    packaged_task(packaged_task&& rhs) noexcept;
    packaged_task& operator=(packaged_task&& rhs) noexcept;
    void swap(packaged_task& other) noexcept;
    bool valid() const noexcept;
    
    future<R> get_future();
    
    void operator()(ArgTypes... );
    void make_ready_at_thread_exit(ArgTypes...);
    void reset();
  };
  template<class R, class... ArgTypes>
    packaged_task(R (*)(ArgTypes...)) -> packaged_task<R(ArgTypes...)>;
  template<class F> packaged_task(F) -> packaged_task<see below>;
}
 packaged_task() noexcept;
Effects: The object has no shared state and no stored task
. template<class F>
  explicit packaged_task(F&& f);
Effects: Equivalent to packaged_task(allocator_arg, allocator<int>(), std::forward<F>(f)).
template<class F, class Allocator>
  explicit packaged_task(allocator_arg_t, const Allocator& a, F&& f);
Constraints: remove_cvref_t<F> is not the same type as packaged_task<R(ArgTypes...)>.
Mandates: is_invocable_r_v<R, decay_t<F>&, ArgTypes...> is true.
Effects: Let A2 be allocator_traits<Allocator>::rebind_alloc<unspecified> and let a2 be an object of type A2 initialized with A2(a). Constructs a new packaged_task object with a stored task of type decay_t<F> and a shared state. Initializes the object's stored task with std::forward<F>(f). Uses a2 to allocate storage for the shared state and stores a copy of a2 in the shared state.
Throws: Any exceptions thrown by the initialization of the stored task. If storage for the shared state cannot be allocated, any exception thrown by A2::allocate.
template<class F> packaged_task(F) -> packaged_task<see below>;
Constraints: &F::operator() is well-formed when treated as an unevaluated operand ([expr.context]) and either
- F::operator() is a non-static member function and decltype(&F::operator()) is either of the form R(G::*)(A...) cv &_opt noexcept_opt or of the form R(*)(G, A...) noexcept_opt for a type G, or
- F::operator() is a static member function and decltype(&F::operator()) is of the form R(*)(A...) noexcept_opt.
Remarks: The deduced type is packaged_task<R(A...)>.
packaged_task(packaged_task&& rhs) noexcept;
Effects: Transfers ownership of rhs's shared state to *this, leaving rhs with no shared state. Moves the stored task from rhs to *this.
Postconditions: rhs has no shared state.
packaged_task& operator=(packaged_task&& rhs) noexcept;
Effects:
- Releases any shared state ([futures.state]);
- calls packaged_task(std::move(rhs)).swap(*this).
void swap(packaged_task& other) noexcept;
Effects: Exchanges the shared states and stored tasks of *this and other.
Postconditions: *this has the same shared state and stored task (if any) as other prior to the call to swap. other has the same shared state and stored task (if any) as *this prior to the call to swap.
bool valid() const noexcept;
Returns: 
true only if 
*this has a shared state
.
Synchronization: Calls to this function do not introduce data races ([intro.multithread]) with calls to operator() or make_ready_at_thread_exit. [Note 1: Such calls need not synchronize with each other. -- end note]
future<R> get_future();
Returns: A future object that shares the same shared state as *this.
Throws: A future_error object if an error occurs.
Error conditions:
- future_already_retrieved if get_future has already been called on a packaged_task object with the same shared state as *this.
- no_state if *this has no shared state.
 void operator()(ArgTypes... args);
Effects: As if by 
INVOKE<R>(f, t1, t2, …, tN) (
[func.require]),
where 
f is the
stored task of 
*this and
t1, t2, …, tN are the values in 
args....  If the task returns normally,
the return value is stored as the asynchronous result in the shared state of
*this, otherwise the exception thrown by the task is stored
.The
shared state of 
*this is made ready, and any threads blocked in a
function waiting for
the shared state of 
*this to become ready are unblocked
.Throws: A 
future_error exception object if there is no shared
state or the stored task has already been invoked
.
Error conditions:
- promise_already_satisfied if the stored task has already been invoked.
- no_state if *this has no shared state.
 void make_ready_at_thread_exit(ArgTypes... args);
Effects: As if by 
INVOKE<R>(f, t1, t2, …, tN) (
[func.require]),
where 
f is the stored task and
t1, t2, …, tN are the values in 
args....  If the task returns normally,
the return value is stored as the asynchronous result in the shared state of
*this, otherwise the exception thrown by the task is stored
.In either
case, this is done without making that state ready (
[futures.state]) immediately
.Schedules
the shared state to be made ready when the current thread exits,
after all objects with thread storage duration associated with the current thread
have been destroyed
.
Throws: future_error if an error condition occurs.
Error conditions:
- promise_already_satisfied if the stored task has already been invoked.
- no_state if *this has no shared state.
void reset();
Effects: Equivalent to:
if (!valid()) {
  throw future_error(future_errc::no_state);
}
*this = packaged_task(allocator_arg, a, std::move(f));
where f is the task stored in *this and a is the allocator stored in the shared state. [Note 2: This constructs a new shared state for *this. -- end note]
Throws:
- Any exception thrown by the packaged_task constructor.
- future_error with an error condition of no_state if *this has no shared state.
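A non-normative sketch of the packaged_task workflow: the task is wrapped, its future is retrieved, and the stored result becomes available once the task has been invoked. Running the task on a separate thread is illustrative; it could equally be invoked in place.
#include <future>
#include <iostream>
#include <thread>

int main() {
  std::packaged_task<int(int, int)> task([](int a, int b) { return a + b; });
  std::future<int> result = task.get_future();   // shares the task's shared state

  std::jthread runner(std::move(task), 2, 3);    // operator() stores 5 in the shared state

  std::cout << result.get() << '\n';             // prints 5

  // The moved-from 'task' has no shared state; calling reset() on it
  // would throw future_error with an error condition of no_state.
}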
 template<class R, class... ArgTypes>
  void swap(packaged_task<R(ArgTypes...)>& x, packaged_task<R(ArgTypes...)>& y) noexcept;
Effects: As if by x.swap(y).
Subclause [saferecl] contains safe-reclamation techniques, which are most frequently used to straightforwardly resolve access-deletion races.
RCU is a synchronization mechanism that can be used for linked data structures that are frequently read, but seldom updated. RCU does not provide mutual exclusion, but instead allows the user to schedule specified actions such as deletion at some later time.
A class type
T is 
rcu-protectable
if it has exactly one base class of type 
rcu_obj_base<T, D>
for some 
D, and that base is public and non-virtual, and
it has no base classes of type 
rcu_obj_base<X, Y>
for any other combination 
X, 
Y.An object is rcu-protectable if it is of rcu-protectable type
.An invocation of 
unlock U on an 
rcu_domain dom
corresponds to an invocation of 
lock L on 
dom
if 
L is sequenced before 
U and either
- no other invocation of lock on dom
is sequenced after L and before U, or
- every invocation of unlock U2 on dom
such that L is sequenced before U2 and U2 is sequenced before U
corresponds to an invocation of lock L2 on dom
such that L is sequenced before L2 and L2 is sequenced before U2.
[
Note 1: 
This pairs nested locks and unlocks on a given domain in each thread
. β 
end note]
Given a region of RCU protection 
R on a domain 
dom and
given an evaluation 
E that scheduled another evaluation 
F in 
dom,
if 
E does not strongly happen before the start of 
R,
the end of 
R strongly happens before evaluating 
F.The evaluation of a scheduled evaluation is potentially concurrent with
any other scheduled evaluation
.Each scheduled evaluation is evaluated at most once
.Objects of type 
T to be protected by RCU inherit from
a specialization 
rcu_obj_base<T, D> for some 
D.namespace std {
  template<class T, class D = default_delete<T>>
  class rcu_obj_base {
  public:
    void retire(D d = D(), rcu_domain& dom = rcu_default_domain()) noexcept;
  protected:
    rcu_obj_base() = default;
    rcu_obj_base(const rcu_obj_base&) = default;
    rcu_obj_base(rcu_obj_base&&) = default;
    rcu_obj_base& operator=(const rcu_obj_base&) = default;
    rcu_obj_base& operator=(rcu_obj_base&&) = default;
    ~rcu_obj_base() = default;
  private:
    D deleter;            
  };
}
The behavior of a program that adds specializations for rcu_obj_base is undefined.
T may be an incomplete type. It shall be complete before any member of the resulting specialization of rcu_obj_base is referenced.
D shall be a function object type ([function.objects]) for which, given a value d of type D and a value ptr of type T*, the expression d(ptr) is valid.
If D is trivially copyable, all specializations of rcu_obj_base<T, D> are trivially copyable.
void retire(D d = D(), rcu_domain& dom = rcu_default_domain()) noexcept;
Mandates: 
T is an rcu-protectable type
. Preconditions: 
*this is
a base class subobject of an object 
x of type 
T.  The member function 
rcu_obj_base<T, D>::retire
was not invoked on 
x before
. The assignment to deleter does not exit via an exception.
Effects: Evaluates deleter = std::move(d) and schedules the evaluation of the expression deleter(addressof(x)) in the domain dom; the behavior is undefined if that evaluation exits via an exception. May invoke scheduled evaluations in dom. [Note 1: If such evaluations acquire resources held across any invocation of retire on dom, deadlock can occur. -- end note]
namespace std {
  class rcu_domain {
  public:
    rcu_domain(const rcu_domain&) = delete;
    rcu_domain& operator=(const rcu_domain&) = delete;
    void lock() noexcept;
    bool try_lock() noexcept;
    void unlock() noexcept;
  };
}
This class meets the requirements of
Cpp17Lockable (
[thread.req.lockable.req]) and
provides regions of RCU protection
.[
Example 1: 
std::scoped_lock<rcu_domain> rlock(rcu_default_domain());
-- end example]
The functions lock and unlock establish (possibly nested) regions of RCU protection.
void lock() noexcept;
Effects: Opens a region of RCU protection.
Remarks: Calls to lock do not introduce a data race ([intro.races]) involving *this.
bool try_lock() noexcept;
Effects: Equivalent to lock().
void unlock() noexcept;
Preconditions: A call to lock that opened an unclosed region of RCU protection is sequenced before the call to unlock.
Effects: Closes the unclosed region of RCU protection that was most recently opened. May invoke scheduled evaluations in *this. [Note 1: If such evaluations acquire resources held across any invocation of unlock on *this, deadlock can occur. -- end note]
Remarks: Calls to unlock do not introduce a data race involving *this. [Note 2: Evaluation of scheduled evaluations can still cause a data race. -- end note]
rcu_domain& rcu_default_domain() noexcept;
Returns: A reference to a static-duration object of type rcu_domain. A reference to the same object is returned every time this function is called.
void rcu_synchronize(rcu_domain& dom = rcu_default_domain()) noexcept;
Effects: If the call to rcu_synchronize does not strongly happen before the lock opening an RCU protection region R on dom, blocks until the unlock closing R happens.
Synchronization: The unlock closing R strongly happens before the return from rcu_synchronize.
void rcu_barrier(rcu_domain& dom = rcu_default_domain()) noexcept;
Effects: May evaluate any scheduled evaluations in dom. For any evaluation that happens before the call to rcu_barrier and that schedules an evaluation E in dom, blocks until E has been evaluated.
Synchronization: The evaluation of any such E strongly happens before the return from rcu_barrier.
[Note 1: A call to rcu_barrier does not imply a call to rcu_synchronize and vice versa. -- end note]
template<class T, class D = default_delete<T>>
void rcu_retire(T* p, D d = D(), rcu_domain& dom = rcu_default_domain());
Mandates: 
is_move_constructible_v<D> is 
true and
the expression 
d(p) is well-formed
.
Effects: May allocate memory. It is unspecified whether the memory allocation is performed by invoking operator new. Initializes an object d1 of type D from std::move(d). Schedules the evaluation of d1(p) in the domain dom; the behavior is undefined if that evaluation exits via an exception. May invoke scheduled evaluations in dom. [Note 2: If rcu_retire exits via an exception, no evaluation is scheduled. -- end note]
Throws: bad_alloc or any exception thrown by the initialization of d1.
[Note 3: If scheduled evaluations acquire resources held across any invocation of rcu_retire on dom, deadlock can occur. -- end note]
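A non-normative sketch of the usual RCU pattern: readers traverse inside a region of RCU protection while an updater publishes a new version and retires the old one, whose deletion is deferred until no protection region that could still observe it remains. The Config type and the atomic pointer are illustrative only, and the <rcu> header is assumed to provide the facilities described above.
#include <atomic>
#include <mutex>
#include <rcu>

struct Config : std::rcu_obj_base<Config> {    // rcu-protectable type
  int threshold = 0;
};

std::atomic<Config*> current{new Config};

int reader() {
  std::scoped_lock<std::rcu_domain> rlock(std::rcu_default_domain());  // region of RCU protection
  return current.load(std::memory_order_acquire)->threshold;
}

void updater(int value) {
  Config* next = new Config;
  next->threshold = value;
  Config* old = current.exchange(next, std::memory_order_acq_rel);
  old->retire();   // deletion is scheduled; it runs only after every protection
                   // region that could still observe 'old' has ended
}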
A hazard pointer is a single-writer multi-reader pointer that can be owned by at most one thread at any time. Only the owner of the hazard pointer can set its value, while any number of threads may read its value. The owner thread sets the value of a hazard pointer to point to an object in order to indicate to concurrent threads (which may delete such an object) that the object is not yet safe to delete.
A class type T is hazard-protectable if it has exactly one base class of type hazard_pointer_obj_base<T, D> for some D, that base is public and non-virtual, and it has no base classes of type hazard_pointer_obj_base<T2, D2> for any other combination T2, D2.
Upon creation, a hazard pointer is unassociated. Changing the association (possibly to the same object) initiates a new protection epoch and ends the preceding one.
An object x of hazard-protectable type T is retired with a deleter of type D when the member function hazard_pointer_obj_base<T, D>::retire is invoked on x. Any given object x shall be retired at most once. A retired object x is reclaimed by invoking its deleter with a pointer to x; the behavior is undefined if that invocation exits via an exception
. A hazard-protectable object x is possibly-reclaimable with respect to an evaluation A if
- x is not reclaimed; and
- x is retired in an evaluation R and A does not happen before R; and
- for all hazard pointers h and for every protection epoch E of h during which h is associated with x:
  - if the beginning of E happens before R, the end of E strongly happens before A; and
  - if E began by an evaluation of try_protect with argument src, label its atomic load operation L. If there exists an atomic modification B on src such that L observes a modification that is modification-ordered before B, and B happens before x is retired, the end of E strongly happens before A.
[Note 1: In typical use, a store to src sequenced before retiring x will be such an atomic operation B. -- end note]
[Note 2: The latter two conditions convey the informal notion that a protection epoch that began before retiring x, as implied either by the happens-before relation or the coherence order of some source, delays the reclamation of x. -- end note]
The number of possibly-reclaimable objects has an unspecified bound. [Note 3: The bound can be a function of the number of hazard pointers, the number of threads that retire objects, and the number of threads that use hazard pointers. -- end note]
[Example 1: The following example shows how hazard pointers allow updates to be carried out in the presence of concurrent readers. The object of type hazard_pointer in print_name protects the object *ptr from being reclaimed by ptr->retire until the end of the protection epoch. -- end example]
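A non-normative sketch of the pattern the example describes; the Name type, the name atomic, and the update_name function are illustrative and not part of this document, and the <hazard_pointer> header is assumed to provide the facilities described below.
#include <atomic>
#include <hazard_pointer>

struct Name : std::hazard_pointer_obj_base<Name> {  // hazard-protectable type
  // ... name data ...
};

std::atomic<Name*> name{new Name};

// Called often, possibly from many threads concurrently.
void print_name() {
  std::hazard_pointer h = std::make_hazard_pointer();
  Name* ptr = h.protect(name);   // protection epoch begins; *ptr cannot be reclaimed
  // ... read *ptr ...
}                                // h is destroyed; the protection epoch ends

// Called rarely; never concurrently with itself.
void update_name(Name* new_name) {
  Name* old = name.exchange(new_name);
  old->retire();                 // reclamation is deferred past any protecting epochs
}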
namespace std {
  template<class T, class D = default_delete<T>>
  class hazard_pointer_obj_base {
  public:
    void retire(D d = D()) noexcept;
  protected:
    hazard_pointer_obj_base() = default;
    hazard_pointer_obj_base(const hazard_pointer_obj_base&) = default;
    hazard_pointer_obj_base(hazard_pointer_obj_base&&) = default;
    hazard_pointer_obj_base& operator=(const hazard_pointer_obj_base&) = default;
    hazard_pointer_obj_base& operator=(hazard_pointer_obj_base&&) = default;
    ~hazard_pointer_obj_base() = default;
  private:
    D deleter;      
  };
}
D shall be a function object type ([func.require]) for which, given a value d of type D and a value ptr of type T*, the expression d(ptr) is valid.
The behavior of a program that adds specializations for hazard_pointer_obj_base is undefined.
T may be an incomplete type. It shall be complete before any member of the resulting specialization of hazard_pointer_obj_base is referenced.
void retire(D d = D()) noexcept;
Mandates: 
T is a hazard-protectable type
. Preconditions: 
*this is
a base class subobject of an object 
x of type 
T. Move-assigning d to deleter does not exit via an exception.
Effects: Move-assigns d to deleter, thereby setting it as the deleter of x, then retires x. May reclaim possibly-reclaimable objects.
namespace std {
  class hazard_pointer {
  public:
    hazard_pointer() noexcept;
    hazard_pointer(hazard_pointer&&) noexcept;
    hazard_pointer& operator=(hazard_pointer&&) noexcept;
    ~hazard_pointer();
    bool empty() const noexcept;
    template<class T> T* protect(const atomic<T*>& src) noexcept;
    template<class T> bool try_protect(T*& ptr, const atomic<T*>& src) noexcept;
    template<class T> void reset_protection(const T* ptr) noexcept;
    void reset_protection(nullptr_t = nullptr) noexcept;
    void swap(hazard_pointer&) noexcept;
  };
}
An object of type hazard_pointer is either empty or owns a hazard pointer. Each hazard pointer is owned by exactly one object of type hazard_pointer. [Note 1: An empty hazard_pointer object is different from a hazard_pointer object that owns an unassociated hazard pointer. An empty hazard_pointer object does not own any hazard pointers. -- end note]
hazard_pointer() noexcept;
Postconditions: *this is empty.
hazard_pointer(hazard_pointer&& other) noexcept;
Postconditions: If other is empty, *this is empty. Otherwise, *this owns the hazard pointer originally owned by other; other is empty.
~hazard_pointer();
Effects: If *this is not empty, destroys the hazard pointer owned by *this, thereby ending its current protection epoch.
hazard_pointer& operator=(hazard_pointer&& other) noexcept;
Effects: If this == &other is true, no effect. Otherwise, if *this is not empty, destroys the hazard pointer owned by *this, thereby ending its current protection epoch.
Postconditions: If other was empty, *this is empty. Otherwise, *this owns the hazard pointer originally owned by other. If this != &other is true, other is empty.
bool empty() const noexcept;
Returns: true if and only if *this is empty.
template<class T> T* protect(const atomic<T*>& src) noexcept;
Effects: Equivalent to:
T* ptr = src.load(memory_order::relaxed);
while (!try_protect(ptr, src)) {}
return ptr;
template<class T> bool try_protect(T*& ptr, const atomic<T*>& src) noexcept;
Mandates: 
T is a hazard-protectable type
. Preconditions: 
*this is not empty
Effects: Performs the following steps in order:
- Initializes a variable old of type T* with the value of ptr.
- Evaluates reset_protection(old).
- Assigns the value of src.load(memory_order::acquire) to ptr.
- If old == ptr is false, evaluates reset_protection().
 template<class T> void reset_protection(const T* ptr) noexcept;
Mandates: 
T is a hazard-protectable type
. Preconditions: 
*this is not empty
Effects: If ptr is a null pointer value, invokes reset_protection(). Otherwise, associates the hazard pointer owned by *this with *ptr, thereby ending the current protection epoch.
void reset_protection(nullptr_t = nullptr) noexcept;
Preconditions: *this is not empty.
Postconditions: The hazard pointer owned by *this is unassociated.
void swap(hazard_pointer& other) noexcept;
Effects: Swaps the hazard pointer ownership of this object with that of other. [Note 1: The owned hazard pointers, if any, remain unchanged during the swap and continue to be associated with the respective objects that they were protecting before the swap, if any. No protection epochs are ended or initiated. -- end note]
hazard_pointer make_hazard_pointer();
Effects: Constructs a hazard pointer.
Returns: A hazard_pointer object that owns the newly-constructed hazard pointer.
Throws: May throw bad_alloc if memory for the hazard pointer could not be allocated.
void swap(hazard_pointer& a, hazard_pointer& b) noexcept;
Effects: Equivalent to a.swap(b).