https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71945

Jonathan Wakely <redi at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Target Milestone|---                         |16.0
           Assignee|unassigned at gcc dot gnu.org      |redi at gcc dot gnu.org
             Status|NEW                         |ASSIGNED

--- Comment #3 from Jonathan Wakely <redi at gcc dot gnu.org> ---
(In reply to Jonathan Wakely from comment #2)
> We should terminate if the counter reaches its maximum value.

That would mean changing from __atomic_add_dispatch to using
__exchange_and_add_dispatch, so that we get the old value back:

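      // Increment the use count.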
      void
      _M_add_ref_copy()
      { __gnu_cxx::__atomic_add_dispatch(&_M_use_count, 1); }

      // Increment the weak count.
      void
      _M_weak_add_ref() noexcept
      { __gnu_cxx::__atomic_add_dispatch(&_M_weak_count, 1); }

For most targets that doesn't make any difference, because they both use the
same implementation anyway:

  _Atomic_word
  __attribute__ ((__unused__))
  __exchange_and_add(volatile _Atomic_word* __mem, int __val) throw ()
  { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }

  void
  __attribute__ ((__unused__))
  __atomic_add(volatile _Atomic_word* __mem, int __val) throw ()
  { __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }

(although I suppose in theory the compiler could tell that the value isn't used
and expand the atomic to the equivalent of C++26 store_add instead of
fetch_add).

If we change the atomic increments to use __exchange_and_add, we can check the
return value and trap if the counter would overflow.
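As a rough sketch (not a committed patch; add_ref_checked is just an
illustrative free function, not the real member), the checked increment could
look something like this, assuming <ext/atomicity.h> and <ext/numeric_traits.h>:

  // Sketch only: __exchange_and_add_dispatch returns the previous value,
  // so we can see that the counter was already at its maximum and trap
  // instead of letting the count silently wrap around.
  #include <ext/atomicity.h>
  #include <ext/numeric_traits.h>

  void
  add_ref_checked(_Atomic_word* __count)
  {
    if (__gnu_cxx::__exchange_and_add_dispatch(__count, 1)
          == __gnu_cxx::__numeric_traits<_Atomic_word>::__max)
      __builtin_trap();
  }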

> We could also consider treating the counters as unsigned, which would double
> their range. I think that could be done in an ABI-compatible way, because no
> correct programs overflow the counters into negative values today.

We can only do that for targets where long is wider than _Atomic_word, because
we need to be able to return the current count as type long from
shared_ptr::use_count(). When long is wider, we can return
make_unsigned_t<_Atomic_word>(_M_use_count) as long without losing any values.
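Just as a sketch (only valid under that wider-long assumption, and assuming
<type_traits> for make_unsigned_t), the accessor behind use_count() could do
the conversion like this:

  // Sketch only: load the raw counter and view it as unsigned before
  // widening to long, so a count that has passed the signed maximum of
  // _Atomic_word (and so looks negative) is still reported correctly.
  long
  _M_get_use_count() const noexcept
  {
    static_assert(sizeof(long) > sizeof(_Atomic_word),
                  "only correct when long can hold the unsigned range");
    return std::make_unsigned_t<_Atomic_word>(
        __atomic_load_n(&_M_use_count, __ATOMIC_RELAXED));
  }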

Since _M_weak_count is never observable (there's no weak_use_count() member
function), we could always treat it as unsigned, even when sizeof(long) ==
sizeof(_Atomic_word).

The __atomic_fetch_add built-in on signed integers is required to wrap like
unsigned integers without UB, but we should check that all our target-specific
implementations of __exchange_and_add (and __exchange_and_add_single) have that
property too.
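
A minimal standalone demonstration of the behaviour we'd be relying on from
the built-in (just an illustration, not a proposed test):

  // The signed counter wraps from INT_MAX to INT_MIN without UB when using
  // the __atomic_fetch_add built-in, and the returned old value tells us
  // exactly when that wrap happened.
  #include <cassert>
  #include <climits>

  int main()
  {
    int counter = INT_MAX;
    int old_val = __atomic_fetch_add(&counter, 1, __ATOMIC_ACQ_REL);
    assert(old_val == INT_MAX);
    assert(counter == INT_MIN);
    return 0;
  }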
