https://gcc.gnu.org/bugzilla/show_bug.cgi?id=121148

            Bug ID: 121148
           Summary: Should use modular arithmetic for _Atomic_word
           Product: gcc
           Version: 16.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: libstdc++
          Assignee: unassigned at gcc dot gnu.org
          Reporter: redi at gcc dot gnu.org
            Blocks: 71945
  Target Milestone: ---

The std::atomic<integral-type>::fetch_xxx members are required to never
overflow:

Remarks: [...] for signed integer types the result is as if the object value
and parameters were converted to their corresponding unsigned types, the
computation performed on those types, and the result converted back to the
signed type.
[Note 2 : There are no undefined results arising from the computation. — end
note]

So std::atomic<int>(INT_MAX).fetch_add(1) returns INT_MAX and leaves the object
holding INT_MIN, rather than causing undefined overflow.
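
For example, this standalone program (just an illustration, not library code)
has well-defined behaviour and both assertions hold:

  #include <atomic>
  #include <cassert>
  #include <climits>

  int main()
  {
    std::atomic<int> i(INT_MAX);
    int old = i.fetch_add(1);    // performed as if in unsigned arithmetic
    assert(old == INT_MAX);      // fetch_add returns the previous value
    assert(i.load() == INT_MIN); // the stored value wrapped around, no UB
  }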

Our atomic helper functions __gnu_cxx::__atomic_add and
__gnu_cxx::__exchange_and_add should behave the same, to reduce surprises and
avoid bugs.

The default implementations are defined in terms of the __atomic_fetch_add
built-in, so they're fine, but the single-threaded fallbacks and some
alternative implementations can overflow.
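
For reference, the default definition is just a thin wrapper around the
built-in (paraphrased from the header):

  // Lock-free path: the built-in performs the addition atomically with
  // wraparound, so there is no signed overflow here.
  inline _Atomic_word
  __attribute__((__always_inline__))
  __exchange_and_add(volatile _Atomic_word* __mem, int __val)
  { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }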

The single-threaded fallback in include/ext/atomicity.h, by contrast, does the
addition directly in the signed type:

  inline _Atomic_word
  __attribute__((__always_inline__))
  __exchange_and_add_single(_Atomic_word* __mem, int __val)
  {
    _Atomic_word __result = *__mem;
    *__mem += __val;
    return __result;
  }

Since _Atomic_word is always a signed type (usually int, but long on 64-bit
sparc), this should use an unsigned type for the arithmetic, e.g.

    _Atomic_word __result = *__mem;
#if __cplusplus >= 201103L
    std::make_unsigned<_Atomic_word>::type __u;
#else
    unsigned long long __u;
#endif
    __u = __result;
    __u += __val;
    *__mem = __u;
    return __result;

And similarly for __atomic_add.
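
A sketch of what the fixed __atomic_add_single could look like, using the same
approach (untested, just to show the idea):

  inline void
  __attribute__((__always_inline__))
  __atomic_add_single(_Atomic_word* __mem, int __val)
  {
#if __cplusplus >= 201103L
    typedef std::make_unsigned<_Atomic_word>::type __uint_type;
#else
    typedef unsigned long long __uint_type;
#endif
    __uint_type __u = *__mem;
    __u += __val;    // unsigned arithmetic: wraps instead of overflowing
    *__mem = __u;    // converted back to the signed _Atomic_word
  }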

And we have this in config/cpu/generic/atomicity_mutex/atomicity.h:

  _Atomic_word
  __attribute__ ((__unused__))
  __exchange_and_add(volatile _Atomic_word* __mem, int __val) throw ()
  {
    __gnu_cxx::__scoped_lock sentry(get_atomic_mutex());
    _Atomic_word __result;
    __result = *__mem;
    *__mem += __val;
    return __result;
  }

This one should just take the lock and delegate, so that we don't need to
repeat the logic (casting away volatile, since __exchange_and_add_single takes
a plain _Atomic_word*):

    __gnu_cxx::__scoped_lock sentry(get_atomic_mutex());
    return __gnu_cxx::__exchange_and_add_single(
        const_cast<_Atomic_word*>(__mem), __val);
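
If the __atomic_add in that file has its own copy of the addition rather than
already forwarding to __exchange_and_add, it could be rewritten the same way
(a sketch, with the declaration assumed to match the existing file):

  void
  __attribute__ ((__unused__))
  __atomic_add(volatile _Atomic_word* __mem, int __val) throw ()
  {
    // Take the lock, then reuse the (fixed) single-threaded helper.
    __gnu_cxx::__scoped_lock sentry(get_atomic_mutex());
    __gnu_cxx::__atomic_add_single(const_cast<_Atomic_word*>(__mem), __val);
  }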


Referenced Bugs:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71945
[Bug 71945] Integer overflow in use counter of shared pointers
