https://gcc.gnu.org/bugzilla/show_bug.cgi?id=122878
--- Comment #5 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jonathan Wakely <[email protected]>:

https://gcc.gnu.org/g:8aefa1855b2b69abec5627b9a6f7d9ee9fab4f85

commit r16-5795-g8aefa1855b2b69abec5627b9a6f7d9ee9fab4f85
Author: Jonathan Wakely <[email protected]>
Date:   Thu Nov 27 11:40:46 2025 +0000

    libstdc++: Fix spinloop in atomic timed waiting function [PR122878]

    The __spin_until_impl function was presumably intended to just spin for
    a short time, then give up and let the caller wait on a futex or
    condvar. However, __spin_until_impl will never stop spinning unless
    either the value changes or the timeout is reached. This means that
    when __spin_until_impl returns, the caller should immediately return
    (because either the value we were waiting for has changed, or the
    timeout happened). So __wait_until_impl should never block on a futex
    or condvar.

    However, the check for the return value of __spin_until_impl would
    only return if the value changed (i.e. !__res._M_timeout). So if the
    timeout occurred, it would fall through and block on the futex/condvar,
    even though the timeout has already been reached. This was causing a
    major performance regression in the timed waiting functions of
    std::counting_semaphore.

    The simplest fix is to replace __spin_until_impl entirely, just
    calling __spin_impl to spin a small, finite number of times, and then
    return immediately if either the value changed or the timeout
    happened. This ensures that we don't block on the futex/condvar
    unnecessarily.

    Removing __spin_until_impl also has the advantage that we no longer
    keep calling steady_clock::now() on every iteration to check for a
    timeout. That was also adding significant overhead to the timed
    waiting functions.

    libstdc++-v3/ChangeLog:

            PR libstdc++/122878
            * src/c++20/atomic.cc (__spin_until_impl): Remove.
            (__wait_until_impl): Use __spin_impl instead of
            __spin_until_impl and return if timeout is reached after
            spinning.

    Reviewed-by: Tomasz Kamiński <[email protected]>
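For illustration only, here is a rough sketch of the control flow the fix
describes: a small, bounded spin, then a single deadline check before any
blocking wait. The names wait_result, spin_for_change and wait_until below
are invented for this sketch; the real libstdc++ helpers (__spin_impl and
__wait_until_impl in src/c++20/atomic.cc) have different signatures and
internals.

    // Hypothetical, simplified sketch; not the libstdc++ implementation.
    #include <atomic>
    #include <chrono>

    struct wait_result { bool value_changed; bool timed_out; };

    // Bounded spin: poll the atomic a small, fixed number of times and
    // return as soon as the value differs from 'old'. Note there is no
    // call to steady_clock::now() inside the loop.
    inline bool spin_for_change(const std::atomic<int>& a, int old)
    {
      for (int i = 0; i < 64; ++i)   // small, finite spin count
        if (a.load(std::memory_order_acquire) != old)
          return true;
      return false;
    }

    // Shape of the fixed timed wait: spin briefly, then check the
    // deadline exactly once; only fall through to the blocking wait
    // (futex/condvar, omitted here) if neither the value changed nor
    // the timeout has already been reached.
    template<typename Clock, typename Duration>
    wait_result
    wait_until(const std::atomic<int>& a, int old,
               const std::chrono::time_point<Clock, Duration>& deadline)
    {
      if (spin_for_change(a, old))
        return {true, false};        // value changed while spinning
      if (Clock::now() >= deadline)
        return {false, true};        // timeout reached: do not block
      // ... block on a futex or condvar until 'deadline' here ...
      return {a.load(std::memory_order_acquire) != old,
              Clock::now() >= deadline};
    }

The key point of the sketch is that the deadline is consulted only after the
bounded spin, so a wait that has already timed out returns instead of falling
through to the blocking path, which is the regression the commit fixes.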
