https://gcc.gnu.org/bugzilla/show_bug.cgi?id=28103

Jonathan Wakely <redi at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|SUSPENDED                   |NEW

--- Comment #3 from Jonathan Wakely <redi at gcc dot gnu.org> ---
MSVC sets badbit, just like we do:

https://github.com/microsoft/STL/blob/e36ee6c2b9bc6f5b1f70776c18cf5d3a93a69798/stl/inc/__msvc_string_view.hpp#L483
(and other lines in that function).

Libc++ sets both failbit and badbit:

https://github.com/llvm/llvm-project/blob/dc8e078a59a65a8e2b4dd13954bfa497b30ef0e8/libcxx/include/__ostream/basic_ostream.h#L514

The requirement to set failbit came from
https://cplusplus.github.io/LWG/issue211 and certainly seems consistent with
other input operations, which set failbit, not badbit.
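For illustration, this is what the other input operations do: a failed
extraction sets failbit and leaves badbit clear. A minimal sketch (a generic
example, not this PR's exact reproducer):

#include <iostream>
#include <sstream>

int main()
{
  std::istringstream in("abc");
  int n = 0;
  in >> n;  // "abc" is not a valid integer, so extraction fails

  // fail() alone can't distinguish the bits, so inspect rdstate().
  std::cout << std::boolalpha
            << "failbit: " << bool(in.rdstate() & std::ios::failbit) << '\n'
            << "badbit:  " << bool(in.rdstate() & std::ios::badbit) << '\n';
  // Prints failbit: true, badbit: false on conforming implementations.
}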

N.B. https://wg21.link/p1264r2 cleaned up the wording for input operations,
making it clear that badbit should only be set by the library after an input
operation throws an exception.
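To illustrate that principle, a sketch of the one case where badbit is the
right answer under the post-P1264R2 wording (throwing_buf is invented for
this example, not part of any library):

#include <iostream>
#include <stdexcept>
#include <streambuf>
#include <string>

// A streambuf whose underflow always throws, to provoke the "input
// operation throws an exception" case described above.
struct throwing_buf : std::streambuf {
  int_type underflow() override { throw std::runtime_error("read error"); }
};

int main()
{
  throwing_buf buf;
  std::istream in(&buf);
  std::string s;
  in >> s;  // the exception is caught inside the library

  // This is where badbit should be set (and the exception rethrown
  // only if badbit is in exceptions()).
  std::cout << std::boolalpha
            << "badbit: " << bool(in.rdstate() & std::ios::badbit) << '\n';
}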

The libc++ behaviour seems like a reasonable compromise between what the
standard says and what existing user code might be expecting (based on the
behaviour of both MSVC and libstdc++).
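The distinction is visible to user code: fail() reports failbit *or* badbit,
so code that wants to tell "extraction failed" apart from "stream corrupted"
has to test bad() first. A generic sketch of that pattern:

#include <iostream>
#include <sstream>
#include <string>

int main()
{
  std::istringstream in("");  // nothing to extract
  std::string s;
  if (!(in >> s))             // true if failbit or badbit is set
  {
    // fail() covers both bits, so check bad() first to tell them apart.
    if (in.bad())
      std::cout << "stream corrupted (badbit)\n";
    else
      std::cout << "extraction failed (failbit)\n";
  }
}

Code following that pattern would change behaviour if we switched from
badbit to failbit alone, which is part of why setting both (as libc++
does) is attractive.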

I'm unsuspending this to reopen it. I think we should *at least* set failbit.
I'm ambivalent about whether we continue to set badbit as well.
