[Bug driver/13071] no easy way to exclude backward C++ headers from include path

2022-03-17 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=13071

--- Comment #10 from Harald van Dijk  ---
(In reply to Jonathan Wakely from comment #9)
> (In reply to Harald van Dijk from comment #8)
> > (In reply to Andrew Pinski from comment #7)
> > > Isn't doing the extern "C" around standard C++ headers declared by the C++
> > > standard as undefined behavior?
> > 
> > It is (as is doing extern "C++" around standard C++ headers, for that
> > matter),
> 
> Where does it say that?

It's the exact same rule as for extern "C", [using.headers]p3. (And yes,
wrapping the headers in extern "C++" does make a difference compared to not
doing so, and can cause breakage; this is not purely hypothetical, but it is
much less likely to cause problems than extern "C".)
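
For illustration, a minimal sketch of the construct [using.headers]p3 forbids
(my own example, not code from this bug):

  // Both forms violate [using.headers]p3: a standard C++ header may not be
  // included inside a linkage-specification.
  extern "C" {
  #include <complex.h>   // typically breaks outright (templates with C linkage)
  }
  extern "C++" {
  #include <complex.h>   // formally just as undefined, though less likely to break
  }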

> > but <complex.h> only became a standard C++ header in C++11. This
> > bug is from 2003 and the comment before yours was from 2009, so I think
> > <complex.h> was not a standard C++ header yet.
> 
> Agreed. But there is no complex.h in the backward directory now.
>[...]
> Can we close this now?

Testing with GCC 11 as provided by Ubuntu, including <complex.h> will cause
GCC's c++/11/complex.h to be included, both in C++03 and in C++11 modes, but in
C++03 it just delegates to glibc's <complex.h>. To me, that looks like it's
fixed.

[Bug c++/97279] GCC ignores the operation definition of the template

2020-10-05 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97279

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #2 from Harald van Dijk  ---
(In reply to Marek Polacek from comment #1)
> Hmm, ICC also produces 2.  Not quite sure what's going on here.

ICC produces 2 by default, but produces 1 when the -strict-ansi command line
option is used.

[Bug c/97370] comedy of boolean errors for '!a & (b|c)'

2020-10-11 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97370

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #1 from Harald van Dijk  ---
> * 'f' is incorrectly diagnosed even though it's the same thing as 'i' after 
> commuting the operands of '&'. ('i' is correctly allowed.)

When an expression is written as !a & b, it is possible the user intended !(a &
b). If it is rewritten as b & !a, it is clear that the user did not intend !(b
& a).

> * The diagnostic for 'f' suggests 'g', but 'g' produces the same diagnostic.

Indeed, and that looks like a bad suggestion by GCC to me. The diagnostic for
'f' should be suggesting (!a) rather than !(a); (!a) does manage to suppress
the diagnostic.

> * The diagnostic for 'f' suggests 'h', but 'h' produces a different
> diagnostic.

Although in general, informing the user that they may have wanted to use ~ may
be useful, I personally think that suggestion should be dropped if the operand
is of type _Bool/bool. You're correct that bool & ~bool will have the intended
result but my opinion is that that is overly clever code that hurts
readability, and GCC should not be offering that as a suggestion.
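
For reference, a reconstruction of roughly what the f/g/h/i variants look like
(the original testcase is not quoted in this comment, so treat the exact shapes
as assumptions; j is an extra variant showing the (!a) spelling):

  bool f(bool a, bool b) { return !a & b; }   // diagnosed, with the suggestions discussed above
  bool g(bool a, bool b) { return !(a) & b; } // the suggested !(a): reportedly still diagnosed
  bool h(bool a, bool b) { return ~a & b; }   // the suggested ~: reportedly a different diagnostic
  bool i(bool a, bool b) { return b & !a; }   // commuted operands: correctly allowed
  bool j(bool a, bool b) { return (!a) & b; } // (!a) does suppress the diagnostic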

[Bug c/97370] comedy of boolean errors for '!a & (b|c)'

2020-10-12 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97370

--- Comment #3 from Harald van Dijk  ---
(In reply to eggert from comment #2)
> That's so unlikely as to not be worth worrying about.

See PR 7543 for the history of that warning.

> And even if it were
> more likely, the same argument would apply to !a && b.

A very significant difference is that !a && b is commonly seen where it is
exactly what the programmer wanted. For !a & b, that is not generally the case.

Perhaps the warning could be suppressed specifically for boolean variables,
since those make it more likely that the (!a) & b meaning is exactly what is
intended?

> The GCC documentation says the motivation for warning about ~bool is that
> it's very likely a bug in the program. This motivation does not apply to
> bool & ~bool, so it'd be better to not warn for that case.

Agreed. Apologies for the confusion there, I was trying to say I think the
suggestion to use ~ should be dropped, in which case the warning generated for
the ~ form becomes unrelated to your issue. I was not trying to say that the
warning generated for the ~ form should be kept.

[Bug c++/99362] invalid unused result

2021-03-03 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99362

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #3 from Harald van Dijk  ---
(In reply to Jakub Jelinek from comment #1)
> [[nodiscard]] on a constructor makes no sense.

[[nodiscard]] on constructors doesn't apply to discarding the implicit return
value that some ABIs force; it applies to discarding the constructed object.
See http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1771r1.pdf. It can
make sense if it is used for that.
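
A minimal sketch of what P1771R1 enables, assuming a compiler that implements
it (the type and values here are illustrative only):

  struct handle {
    [[nodiscard]] handle(int) {}   // discarding a handle(int) temporary is suspicious
    handle() {}                    // the default constructor carries no such hint
  };

  void g() {
    handle(42);   // expected: a "discarding a nodiscard value"-style warning
    handle();     // expected: no warning
  }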

[Bug c/99577] New: Non-constant (but actually constant) initializers referencing other constants no longer diagnosed as of GCC 8

2021-03-13 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99577

Bug ID: 99577
   Summary: Non-constant (but actually constant) initializers
referencing other constants no longer diagnosed as of
GCC 8
   Product: gcc
   Version: 10.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: harald at gigawatt dot nl
  Target Milestone: ---

GCC 8 and newer no longer issue an error for

  const int i = 0;
  const int j = i;

Up until GCC 7, this resulted in

test.c:2:15: error: initializer element is not constant
 const int j = i;
   ^

As in the similar (and perhaps related?) bug #66618, the standard does not
require a diagnostic for this code, but the code is not portable and gets
rejected by some other compilers, so an option in GCC to diagnose it would be
useful.

This bug is the opposite of bug #53091, which asks for this to be accepted and
was never updated after GCC started to accept it. As noted in that bug, clang
accepts this as well without any diagnostic. I will report it as an issue to
them too, if it has not been reported already.

[Bug c++/97755] Explicit default constructor is called during copy-list-initialization with a warning only

2020-11-08 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97755

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #1 from Harald van Dijk  ---
This may be in order to ensure that the following valid C++03 code is accepted
in C++11 mode as well, to limit the impact when the default language version
was changed:

  struct A {
explicit A(int = 24);
  };
  int main() {
A a[1] = {};
  }

This did not get diagnosed in GCC 5 in any mode. GCC 6 accepts it without a
warning in C++03 mode, and accepts it with a warning in C++11 mode.

[Bug c++/100309] [11 regression] false positive -Wstringop-overflow/stringop-overread/array-bounds on reinterpret_cast'd integers

2021-04-28 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100309

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #1 from Harald van Dijk  ---
This is a duplicate of bug 99578.

[Bug c/100353] [11/12 Regression] Accepts invalid label

2021-04-30 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100353

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #1 from Harald van Dijk  ---
This was intentional: this will become valid C, so it is now only diagnosed
with -pedantic, and only in modes before C2x:
https://gcc.gnu.org/onlinedocs/gcc/Mixed-Labels-and-Declarations.html

[Bug c++/100700] -Wreturn-type has many false positives

2021-05-21 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100700

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #6 from Harald van Dijk  ---
If the warning should mention -fstrict-enums, it should only do it in specific
cases:

enum E { A, B, C };

int h(E e) {
  switch (e) {
  case A: return 0;
  case B: return 0;
  case C: return 0;
  }
}

will still trigger the warning even with -fstrict-enums, because (E)3 is still
a valid value of E.
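
A sketch of the contrast, as I understand it (my own example, not taken from
the bug):

  enum F { X, Y };   // with -fstrict-enums the value range of F is just {0, 1}

  int g(F f) {
    switch (f) {
    case X: return 0;
    case Y: return 1;
    }
  }
  // Here every value that -fstrict-enums allows does return, so mentioning the
  // option in the -Wreturn-type warning would genuinely help. For the
  // three-enumerator E above, (E)3 remains a valid value either way.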

[Bug bootstrap/100731] New: GCC 11 fails to build using GCC 4.8 because of missing includes

2021-05-23 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100731

Bug ID: 100731
   Summary: GCC 11 fails to build using GCC 4.8 because of missing
includes
   Product: gcc
   Version: 11.1.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: bootstrap
  Assignee: unassigned at gcc dot gnu.org
  Reporter: harald at gigawatt dot nl
  Target Milestone: ---

When building GCC 11 with GCC 4.8 on a platform without _GLIBCXX_USE_C99, the
build fails. The result is:

../../gcc-11.1.0/c++tools/server.cc: In function ‘void internal_error(const
char*, ...)’:
../../gcc-11.1.0/c++tools/server.cc:199:10: error: ‘exit’ was not declared in
this scope
   exit (2);
  ^
../../gcc-11.1.0/c++tools/server.cc: In function ‘void error(const char*,
...)’:
../../gcc-11.1.0/c++tools/server.cc:233:10: error: ‘exit’ was not declared in
this scope
   exit (1);
  ^
../../gcc-11.1.0/c++tools/server.cc: In function ‘void print_usage(int)’:
../../gcc-11.1.0/c++tools/server.cc:284:15: error: ‘exit’ was not declared in
this scope
   exit (status);
   ^
../../gcc-11.1.0/c++tools/server.cc: In function ‘void print_version()’:
../../gcc-11.1.0/c++tools/server.cc:299:10: error: ‘exit’ was not declared in
this scope
   exit (0);
  ^
../../gcc-11.1.0/c++tools/server.cc: In function ‘int
maybe_parse_socket(std::string&, module_resolver*)’:
../../gcc-11.1.0/c++tools/server.cc:828:48: error: ‘strtoul’ was not declared
in this scope
unsigned port = strtoul (cptr + 1, &endp, 10);
^
../../gcc-11.1.0/c++tools/server.cc: In function ‘void internal_error(const
char*, ...)’:
../../gcc-11.1.0/c++tools/server.cc:200:1: warning: ‘noreturn’ function does
return [enabled by default]
 }
 ^
../../gcc-11.1.0/c++tools/server.cc: In function ‘void print_version()’:
../../gcc-11.1.0/c++tools/server.cc:300:1: warning: ‘noreturn’ function does
return [enabled by default]
 }
 ^
../../gcc-11.1.0/c++tools/server.cc: In function ‘void print_usage(int)’:
../../gcc-11.1.0/c++tools/server.cc:285:1: warning: ‘noreturn’ function does
return [enabled by default]
 }
 ^
../../gcc-11.1.0/c++tools/server.cc: In function ‘void error(const char*,
...)’:
../../gcc-11.1.0/c++tools/server.cc:234:1: warning: ‘noreturn’ function does
return [enabled by default]
 }
 ^

If the functions from <stdlib.h> are wanted, this file should just include
<stdlib.h> directly rather than relying on C++ headers pulling it in.

This happens for me on uclibc but is reproducible on glibc by locally modifying
GCC 4.8's c++config.h to not define _GLIBCXX_USE_C99.

[Bug bootstrap/100731] GCC 11 fails to build using GCC 4.8 because of missing includes

2021-05-23 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100731

--- Comment #1 from Harald van Dijk  ---
The full configure line I used for reproducing this on glibc, btw:

  ../gcc-11.1.0/configure --prefix=$HOME/gcc-11.1.0-run CC=gcc-4.8.5
CXX=g++-4.8.5 --enable-languages=c,c++

[Bug bootstrap/100731] GCC 11 fails to build using GCC 4.8 because of missing includes

2021-05-23 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100731

--- Comment #2 from Harald van Dijk  ---
There are more missing or wrong includes here: looking at the code, it's also
using functions from another header without including it, but that one gets
implicitly included for me even on this old G++ and so happens not to cause an
error. It's also using strrchr from <string.h> while only including <cstring>,
without qualifying it as std::strrchr, which will work on all GCC versions but
is not correct.

[Bug c++/100731] [11/12 Regression] GCC 11 fails to build using GCC 4.8 because of missing includes

2021-05-25 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100731

--- Comment #4 from Harald van Dijk  ---
(In reply to Richard Biener from comment #3)

Yes, including <cstdlib> is enough to get the build to pass. My last point in
comment #2, however, means that that leaves things in an inconsistent state and
that the right fix depends on what the project wants. There are basically two
options that look equally reasonable to me: adding an include of <cstdlib>
(and, although not required to fix the build, the other header mentioned in
comment #2) and adding std:: qualifiers to everything that needs it, or adding
an include of <stdlib.h> (and the corresponding <*.h> header) and changing the
other existing <c*> includes to <*.h>. Happy to send a patch for whichever of
these is preferred.

[Bug c++/100731] [11/12 Regression] GCC 11 fails to build using GCC 4.8 because of missing includes

2021-05-25 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100731

--- Comment #6 from Harald van Dijk  ---
(In reply to rguent...@suse.de from comment #5)
> At this point a minimal fix is preferred - in principle the file
> should be a valid source to any C++11-capable host compiler, not
> just GCC.  The maintainer is on leave but we do want the build to
> be fixed.  Now, since the file already includes csignal/cstring and
> cstdarg I'd say using the C++ wrapper to C includes and qualifying
> the calls would be consistent with existing use (thus not including
> stdlib.h but cstdlib).

The minimal fix is the other one, to change the headers to <*.h>, as none of
the calls to library functions in the file are std::-qualified. :) Alright,
I'll send a patch for that once I'm able to test that the same problem is still
present on master and that the same fix is sufficient to get things working.

[Bug c++/100731] [11/12 Regression] GCC 11 fails to build using GCC 4.8 because of missing includes

2021-05-25 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100731

--- Comment #8 from Harald van Dijk  ---
I take it that means there's no need for me to continue with what Richard asked
me to do?

At any rate, it looks like this fix won't be enough for GCC 12, but that's an
issue with the environment, not GCC 12. In !_GLIBCXX_USE_C99 environments,
there is always going to be valid C++11 code that will be rejected, and it
looks like GCC 12 has started using at least one std::to_string call that
relies on C99 support in the underlying C library. If C++11 is a bootstrap
requirement, that's fair enough; it just means I need to update my environment
at some point before GCC 12 is released.

[Bug c++/100805] __int128 should be disabled for non-extended -std= options

2021-05-27 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100805

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #2 from Harald van Dijk  ---
(In reply to Andreas Schwab from comment #1)
> The C++ standard says: [lex.icon] "If an integer literal cannot be
> represented by any type in its list and an extended integer type (6.8.1) can
> represent its value, it may have that extended integer type."

__int128 behaves mostly like an integer type but is not an "extended integer
type" as defined in the standard. Quoting from
https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html: "GCC does not
support any extended integer types." Extended integer types must meet specific
requirements that __int128 does not meet: extended integer types cannot be
larger than intmax_t, and __int128 is.

Despite __int128 not being an extended integer type, there is nothing wrong
with having __int128 enabled in standards-conforming mode. Out-of-range
constants must be diagnosed, but they already are, and continuing to accept the
program after that is valid.

The warning that is generated for the out-of-range constant is highly
misleading though: the warning says "integer constant is so large that it is
unsigned". Either the constant should be given an unsigned type, or the warning
should be updated to reflect the type the constant actually gets.

[Bug sanitizer/71458] ICE with -fsanitize=bounds

2021-06-02 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71458

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #9 from Harald van Dijk  ---
I know the GCC 5 branch is long closed and this will not be fixed, but for
completeness, the backport to GCC 5 was wrong: error (UNKNOWN_LOCATION, "...")
should have been error ("..."). error (UNKNOWN_LOCATION, "...") compiles but
does the wrong thing: UNKNOWN_LOCATION is interpreted as a null pointer format
string and causes a segfault.

[Bug libstdc++/101234] Two tests require en_US.ISO-8859-15 but glibc no longer installs that by default.

2021-06-28 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101234

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #2 from Harald van Dijk  ---
(In reply to Ken Moffat from comment #0)
> In 22_locale/codecvt/out/wchar_t/3.cc and 22_locale/codecvt/in/wchar_t/3.cc
> the tests are only run if dejagnu finds the en_US.ISO-8859-15 locale. I
> assume that glibc used to install that when installing all locales (e.g.
> [https://centos.pkgs.org/8-stream/centos-baseos-x86_64/glibc-langpack-en-2.
> 28-155.el8.x86_64.rpm.html] implies that), but recent glibc does not
> automatically install it.

This locale was and is a Red Hat change, not something that used to be part of
official glibc; the patch that adds it is still used by Red Hat now.

(Not that that means the test should continue using this locale.)

[Bug c++/100409] C++ FE elides pure throwing call

2021-07-08 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100409

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #4 from Harald van Dijk  ---
The documentation for the pure attribute refers to "functions that have no
observable effects on the state of the program other than to return a value"
which implies not throwing exceptions, the -Wsuggest-attribute=pure text says
that pure functions have to return normally, and the presence of throw
statements suppresses the compiler's suggestion to mark functions as pure. This
function throws, so should the fact that it is marked pure not simply make the
whole thing undefined?

[Bug c++/101376] New: Missing Wsuggest-attribute=const/Wsuggest-attribute=pure for throwing functions, wrong Wattributes for pure/const throwing functions

2021-07-08 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101376

Bug ID: 101376
   Summary: Missing
Wsuggest-attribute=const/Wsuggest-attribute=pure for
throwing functions, wrong Wattributes for pure/const
throwing functions
   Product: gcc
   Version: 11.1.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: harald at gigawatt dot nl
  Target Milestone: ---

According to PR100409, const/pure functions are allowed to throw. As such, I
would expect that

  void f() {
throw "!";
  }

produces a diagnostic with -Wsuggest-attribute=const but no diagnostic is
produced.

In fact, if I modify this to add the attribute myself, like so:

  __attribute__((const)) void f() {
throw "!";
  }

I get "warning: 'const' attribute on function returning 'void'". This warning
is documented but now wrong. The documentation reads

  "Because a const function cannot have any observable side effects it does not
make sense for it to return void. Declaring such a function is diagnosed."

A helper function to throw an exception makes sense to declare as returning
void, and meets the criteria of a const function.

[Bug c++/100409] C++ FE elides pure throwing call

2021-07-08 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100409

--- Comment #9 from Harald van Dijk  ---
(In reply to Richard Biener from comment #8)
> It has been consensus that throwing exceptions and const/pure are different
> concepts that co-exist.  See for example the recent discussion at
> https://gcc.gnu.org/pipermail/gcc-patches/2021-May/569435.html

Thanks, based on that I have created PR101376.

[Bug libgcc/101489] New: Documentation gives wrong signatures for libgcc float128 routines

2021-07-17 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101489

Bug ID: 101489
   Summary: Documentation gives wrong signatures for libgcc
float128 routines
   Product: gcc
   Version: 12.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: libgcc
  Assignee: unassigned at gcc dot gnu.org
  Reporter: harald at gigawatt dot nl
  Target Milestone: ---

In the GCC internals documentation of the libgcc soft-float library routines,
all functions that take __float128 / _Float128 are declared as taking long
double instead. It is possible this was written back when 128-bit float types
were only available as long double, and only on some platforms, so it would
have been correct at that time, but it is no longer correct now that __float128
is more widely available.

[Bug libgcc/101489] Documentation gives wrong signatures for libgcc float128 routines

2021-07-28 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101489

--- Comment #2 from Harald van Dijk  ---
Ah, thanks for the pointer. Agreed that the signatures are correct based on
that, but they are not exactly clear as they make it impossible to tell apart
the xf and tf cases. Please consider this as an enhancement request, then,
rather than a bug.

My reason for filing this bug was that I noticed something in compiler-rt that
I suspect may be caused by the unclear libgcc documentation. For the most part,
tf implementations are done with long double, but guarded to only apply to
platforms where that is correct. This is fine, it leaves functions undefined on
other platforms but never results in an incorrect definition. There is however
also at least one tf function that is unconditionally done with long double,
which matches the signature in the documentation, but is wrong.
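
For example, the kind of ambiguity involved, using the usual libgcc names (the
prototypes are my reading of the situation, not the documentation's wording):

  extern "C" {
    // xf: extended precision, e.g. the 80-bit long double on x86
    long double __addxf3 (long double a, long double b);
    // tf: binary128, which on most targets is __float128 rather than long double
    __float128 __addtf3 (__float128 a, __float128 b);
  }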

[Bug preprocessor/101864] New: Segmentation fault with -Wtraditional + glibc 2.34

2021-08-11 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101864

Bug ID: 101864
   Summary: Segmentation fault with -Wtraditional + glibc 2.34
   Product: gcc
   Version: 11.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: preprocessor
  Assignee: unassigned at gcc dot gnu.org
  Reporter: harald at gigawatt dot nl
  Target Milestone: ---

This testcase comes from glibc 2.34's sys/cdefs.h, so this is a problem showing
up for any program that uses any of glibc's headers with -Wtraditional.

$ cat test.c
#define __glibc_has_attribute(attr) __has_attribute(attr)
#if __glibc_has_attribute(__deprecated__)
#endif

$ gcc -Wtraditional -E test.c
# 0 "test.c"
# 0 "<built-in>"
# 0 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 0 "<command-line>" 2
# 1 "test.c"
test.c:1: internal compiler error: Segmentation fault
1 | #define __glibc_has_attribute(attr) __has_attribute(attr)
  | 
0x1542e3f internal_error(char const*, ...)
???:0
0x156fa80 cpp_sys_macro_p(cpp_reader*)
???:0
0x155f868 cpp_classify_number(cpp_reader*, cpp_token const*, char const**,
unsigned int)
???:0
0x1561509 _cpp_parse_expr
???:0
0x155bf8f _cpp_handle_directive
???:0
0x156ace1 _cpp_lex_token
???:0
0x6aece2 preprocess_file(cpp_reader*)
???:0
0x6ad33f c_common_init()
???:0
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.

$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-linux-gnux32/11.2.0/lto-wrapper
Target: x86_64-linux-gnux32
Configured with: /h/gcc-11.2.0/configure --prefix=/usr
--datadir=/usr/share/gcc-11.2.0 --infodir=/usr/share/info
--mandir=/usr/share/man --build=x86_64-linux-gnux32 --host=x86_64-linux-gnux32
--enable-languages=c,c++,d,objc,obj-c++,fortran,ada,go --disable-libstdcxx-pch
--enable-version-specific-runtime-libs --with-abi=x32
--with-multilib-list=mx32,m64,m32 --with-system-zlib
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 11.2.0 (GCC)

[Bug preprocessor/101864] Segmentation fault with -Wtraditional + glibc 2.34

2021-08-11 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101864

Harald van Dijk  changed:

   What|Removed |Added

 Resolution|--- |DUPLICATE
 Status|UNCONFIRMED |RESOLVED

--- Comment #1 from Harald van Dijk  ---
Thanks Bugzilla for only showing me this is a duplicate after submitting :)

*** This bug has been marked as a duplicate of bug 101638 ***

[Bug preprocessor/101638] [11/12 Regression] ICE with -Wtraditional since r11-4953-g1d00f8c86324c40a

2021-08-11 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101638

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #2 from Harald van Dijk  ---
*** Bug 101864 has been marked as a duplicate of this bug. ***

[Bug c/101953] bug on the default cast operator from double to unsigned short

2021-08-18 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101953

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #22 from Harald van Dijk  ---
(In reply to M W from comment #3)
> If you think this is the correct behavior, remember, it is unexpected. At
> least give a warning when a value is set to zero instead of a faithful
> attempt to return the correct bits.

Agreed. Without special options, I think this behaviour is defensible
(undefined is undefined), but I would have expected -fsanitize=undefined to
catch this. clang's -fsanitize=undefined does catch this; gcc's doesn't (tested
with Ubuntu's version of GCC 10).

[Bug c/101953] bug on the default cast operator from double to unsigned short

2021-08-18 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101953

--- Comment #27 from Harald van Dijk  ---
(In reply to jos...@codesourcery.com from comment #25)
> The option to use to detect this is -fsanitize=float-cast-overflow (note: 
> I haven't tested if it detects this particular case).  As per the manual: 
> "Unlike other similar options, @option{-fsanitize=float-cast-overflow} is 
> not enabled by @option{-fsanitize=undefined}.".  (Annex F makes the result 
> an unspecified value with "invalid" raised, instead of being undefined 
> behavior, which justifies not including it in -fsanitize=undefined by 
> default.

Have just tested that -fsanitize=float-cast-overflow does indeed catch this
case. Thanks for the explanation, that makes sense.

The fact that it's not included by -fsanitize=undefined even in compilations
where Annex F is not followed or does not apply is a bit weird, but changing it
to be included was bug #100591, closed as invalid; I won't open a new bug
asking for the same thing again.

(In reply to M W from comment #24)
> I know it is documented as "undefined," but it is also unexpected without
> even a warning.

Martin Sebor requested opening a new bug if you'd like to see a compile-time
warning for this, rather than tracking that as part of this bug.

(In reply to M W from comment #26)
> pi@raspberrypi:~ $ gcc -fsanitize=float-cast-overflow -Wall -o badpi badpi.c
> -lm
> pi@raspberrypi:~ $ 
> 
> That flag doesn't work

The flag does work, but it's a runtime warning, not a compile-time warning. Run
badpi and you should see it.
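
A minimal sketch of the kind of program involved (my own example; badpi.c
itself is not shown in this bug):

  #include <cstdio>

  int main() {
    double d = 123456.0;    // far outside unsigned short's range
    unsigned short s = d;   // the value is not representable, so the conversion
                            // is undefined (unspecified plus "invalid" under Annex F)
    std::printf("%d\n", (int) s);
  }
  // g++ -fsanitize=float-cast-overflow test.cc && ./a.out
  // The sanitizer reports the conversion when it is executed, not at compile time.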

[Bug libstdc++/58876] No non-virtual-dtor warning in std::unique_ptr

2021-08-31 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58876

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #12 from Harald van Dijk  ---
(In reply to Jonathan Wakely from comment #11)
> No, probably not. Comment 2 doesn't work because -Wsystem-headers can't be
> enabled and disabled using pragmas. It doesn't work like other warnings.

However, the internal version of the #line directive, "# [line number] [file
name] [flags]", could be used to mark a region of a system header as a
non-system header, which should achieve the same result, right? It might need a
bit of cleanup to be maintainable, but this seems to work as a proof of concept:

--- bits/unique_ptr.h
+++ bits/unique_ptr.h
@@ -82,7 +82,9 @@
 "can't delete pointer to incomplete type");
static_assert(sizeof(_Tp)>0,
 "can't delete pointer to incomplete type");
+# 86 __FILE__
delete __ptr;
+# 88 __FILE__ 3
   }
 };

[Bug c++/102201] Accepts invalid C++98 with nested class and sizeof of outer's non-static field

2021-09-04 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102201

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #1 from Harald van Dijk  ---
This doesn't need inner classes, a simpler reproducer is:

struct S { int i; };
int j = sizeof S::i;

gcc accepts this in all modes ever since the C++11 rule for non-static members
in unevaluated contexts was implemented (in GCC 4.4). clang says in C++98 mode:

test.cc:2:19: error: invalid use of non-static data member 'i'
int j = sizeof S::i;
   ~~~^
1 error generated.

[Bug driver/13071] no easy way to exclude backward C++ headers from include path

2021-09-06 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=13071

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #8 from Harald van Dijk  ---
(In reply to Andrew Pinski from comment #7)
> Isn't doing the extern "C" around standard C++ headers declared by the C++
> standard as undefined behavior?

It is (as is doing extern "C++" around standard C++ headers, for that matter),
but <complex.h> only became a standard C++ header in C++11. This bug is from
2003 and the comment before yours was from 2009, so I think <complex.h> was not
a standard C++ header yet.

[Bug middle-end/113959] Optimize `__builtin_isnan(x) || __builtin_isinf(x)` to `__builtin_isfinite(x)`

2024-02-16 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113959

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #2 from Harald van Dijk  ---
See also bug #66462: the code currently generated is wrong under
-fsignaling-nans, a patch has been posted to fix that, and had then been
forgotten. The patch may possibly improve codegen regardless of
-fsignaling-nans.

[Bug c++/114104] nodiscard not diagnosed on synthesized operator!=

2024-02-25 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114104

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #2 from Harald van Dijk  ---
Isn't this behaving as designed as far as nodiscard goes? x != 0 is defined to
be evaluated as !(x == 0) per [over.match.oper]p9, where the result of x == 0
is not a discarded-value expression, and therefore nodiscard suggests no
warning for it.

That said, there is a missing general warning in GCC about the built-in !
operator being discarded:

  bool f();
  int main() {
!f(); // clang: warning: expression result unused [-Wunused-value]
  // gcc: no warning
  }

For similar useless operations, such as f() ^ true;, GCC emits a similar
warning "warning: value computed is not used [-Wunused-value]". Presumably, if
that warning were implemented in GCC for ! as well, it should also fire for
your original x != 0 test?

[Bug middle-end/94083] inefficient soft-float x!=Inf code

2024-02-28 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94083

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #4 from Harald van Dijk  ---
(In reply to Jakub Jelinek from comment #3)
> Shall __builtin_isinf (x) or __builtin_isinf_sign (x) raise exception if x
> is a sNaN?
> Or never? Or it can but doesn't have to?

Never.

See also bug #66462 which also has a not-quite-right patch that was committed
and reverted, and the fixed patch posted but never committed and then
forgotten. I'm not 100% sure of the impact of that patch on soft-float but at a
quick glance it seems to use bitwise integer arithmetic which should avoid
libcalls entirely.
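
For reference, the sort of integer-only test this amounts to (my own
illustration assuming IEEE-754 binary64; it is not the patch from bug 66462):

  #include <cstdint>
  #include <cstring>

  bool is_inf_bits(double x) {
    std::uint64_t u;
    std::memcpy(&u, &x, sizeof u);               // inspect the representation directly
    u &= UINT64_C(0x7fffffffffffffff);           // clear the sign bit
    return u == UINT64_C(0x7ff0000000000000);    // exponent all ones, mantissa zero;
  }                                              // no FP compare, no libcall, no trap on sNaN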

[Bug middle-end/94083] inefficient soft-float x!=Inf code

2024-02-28 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94083

--- Comment #7 from Harald van Dijk  ---
(In reply to Joseph S. Myers from comment #6)
> Contrary to what was claimed in bug 66462, I don't think there ever was a
> fixed patch. Note that in bug 66462 comment 19, "June" is June 2017 but
> "November" is November 2016 - the "November" one is the *older* one.

Ah, sorry, I misunderstood the situation. According to
 the earlier
version of that patch (the November 2016 one) was the one that did not have the
problems that caused it to be reverted. In response to review of that, big
changes were requested and in the process bugs were introduced. The buggy
version was then committed and reverted. The original version that did not have
those bugs could still be committed if a re-review, taking into account the
bugs that the rework introduced, would now see it as acceptable.

[Bug c++/114163] Calling member function of an incomplete type compiles in gcc and does not compile in clang and msvc

2024-02-29 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114163

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #1 from Harald van Dijk  ---
You may be right that the class is not complete at that point but there is no
requirement for the class to be complete. CWG1836 changed that to "In both
cases, the class type shall be complete unless the class member access appears
in the definition of that class." Here, the class member access does appear in
the definition of the class, so it's okay that the class is incomplete.

[Bug tree-optimization/114363] inconsistent optimization of pow(x,2)+pow(y,2)

2024-03-16 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114363

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #1 from Harald van Dijk  ---
This is, I believe, correct. Before C++11, calling std::pow with float and int
arguments, it returned a float. As of C++11, it returns a double.

If the result of pow(x,2) is immediately converted to float, then it is a valid
optimisation to convert it to x*x: that is guaranteed to produce the exact same
result. But if it isn't, then converting to x*x loses accuracy and alters the
result.

You can call std::powf instead of std::pow to avoid the promotion to double.
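
Concretely, the two cases described above (my own sketch, assuming C++11
semantics for std::pow with a float and an int argument):

  #include <cmath>

  float  f(float x) { return std::pow(x, 2); }  // result is converted straight back to
                                                // float: replacing it with x*x is exact
  double g(float x) { return std::pow(x, 2); }  // result stays double: computing x*x in
                                                // float would change the value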

[Bug c/114526] ISO C does not prohibit extensions: fix misconception.

2024-03-28 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114526

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #4 from Harald van Dijk  ---
(In reply to Andrew Pinski from comment #2)
> Actually it is a required diagnostic.

It is not.

> See PR 11234 for explanation on how.

As acknowledged by PR 11234's reporter in his comment #10 there, the
description of that bug explains why such code is undefined, but not why it
violates any syntax rule or constraint. Indeed, it violates no syntax rule or
constraint, and accordingly no diagnostic is required, as the reporter has
since agreed.

[Bug c/114526] ISO C does not prohibit extensions: fix misconception.

2024-03-28 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114526

--- Comment #6 from Harald van Dijk  ---
(In reply to Joseph S. Myers from comment #5)
> The -pedantic documentation was updated to reflect reality - that the option
> is about more than just when diagnostics are required by ISO C ("forbidden
> extensions" can be taken, in the C case, as meaning those that involve
> constraint violations or are outside the standard C syntax) but covers some
> other programs doing things not defined in ISO C as well - in commit
> 074e95e34275d72664f997ed949d9c91e37cd6ee (July 2000). I don't think any
> possible narrower intent there may have been long before then is
> particularly relevant now.

Actually, the narrower intent is still documented elsewhere in the GCC manual.
Arguably, this behaviour makes -ansi -pedantic-errors a non-conforming
implementation, because it rejects code that is only undefined at runtime,
which ISO C requires implementations to accept unless the implementation can
prove the code is reached.

[Bug c/114526] ISO C does not prohibit extensions: fix misconception.

2024-04-02 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114526

--- Comment #9 from Harald van Dijk  ---
(In reply to Joseph S. Myers from comment #8)
> "rejects", in the ISO C sense, only applies to errors and pedwarns in GCC;
> not to warnings conditional on -pedantic (of which there are also some, but
> which don't turn into errors with -pedantic).
> 
> If you have cases where something that is only *undefined as a property of a
> particular execution of the program* (as opposed to undefined as a property
> of a translation unit or of the collection of translation units making up a
> program, or violating a Constraint or syntax rule) but that are errors or
> pedwarns, those should be reported as separate bugs.

Bug 83584, which like this one is closed as a duplicate of 11234, is about
exactly that.

  void *f(void) { return (void *)f; }
  int main(void) { return 0; }

This is a strictly conforming program. It violates no syntax rule or
constraint, and exhibits no translation-time undefined behaviour, yet it
triggers a pedwarn, turning into an error with -pedantic-errors. It would have
undefined behaviour if f were ever called, but it is not called.

[Bug c/114526] ISO C does not prohibit extensions: fix misconception.

2024-04-02 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114526

--- Comment #10 from Harald van Dijk  ---
Sorry, sent my earlier comment too soon.

(In reply to Joseph S. Myers from comment #8)
> I believe conversions between function and object pointers are undefined as
> a property of the translation unit - not of a particular execution.

But there is nothing in the standard to support this. The standard fully
defines the behaviour of the program I posted, which is just to return 0.

[Bug c/114526] ISO C does not prohibit extensions: fix misconception.

2024-04-02 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114526

--- Comment #14 from Harald van Dijk  ---
(In reply to Joseph S. Myers from comment #11)
> I think that simply failing to say whether a value of type X may be
> converted to type Y is clearly enough for it at least to be unspecified
> whether or when such conversions are possible in a cast at all (which is
> enough for rejecting the translation unit).

I disagree. You're reading something into the standard that it does not say
anywhere. It would make sense if it did say that, but it doesn't.

> And since no requirements are
> imposed relating to such conversions at either translation time or runtime,
> the definition of undefined behavior is met.

The behaviour at runtime is implicitly unspecified. The behaviour at
translation time is not, as my program does not attempt to convert between any
function and object pointer. Performing that conversion is undefined by
omission. Writing code that *would* perform that conversion, if executed, is
not undefined, because the standard defines the behaviour of code that is not
executed: it does nothing.

I am assuming, at least, that there is no dispute that

  #include <stdio.h>
  int main(void) {
if (0) puts("Hello, world!");
return 0;
  }

has never been permitted to print "Hello, world!".

(In reply to Kaz Kylheku from comment #12)
> It does not. You're relying on the implementation (1) glossing over the
> undefined conversion at translation time (or supporting it as an extension)

I'm not.

> Undefined behavior means that the implementation is permitted to stop, at
> translation or execution time, with or without the issuance of a diagnostic
> message.

My program has no undefined behaviour. It is in the same category as

  void f(void) { 1/0; }
  int main(void) { return 0; }

which is strictly conforming despite the division by zero in an uncalled
function, and despite the division having constant operands. The implementation
is not at liberty to treat this as translation-time undefined behaviour,
because the program never divides by zero, even though constant expressions
could otherwise be optimised. And GCC rightly accepts this with
-pedantic-errors.

It is in the same category as

  void f(x) void *x; { f(f); }
  int main(void) { return 0; }

which is strictly conforming despite the unevaluated function call to an
unprototyped function with a wrong argument type where the compiler knows the
type of the parameter. And GCC rightly accepts this too with -pedantic-errors.
It does not even warn by default.

There have been DRs that explicitly address this for various forms of undefined
behaviour (#109, #132, #317). Is it really necessary for WG14 to receive a
separate DR for every category of undefined behaviour to clarify that the same
rules for undefined behaviour apply everywhere?

[Bug c/114526] ISO C does not prohibit extensions: fix misconception.

2024-04-02 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114526

--- Comment #16 from Harald van Dijk  ---
(In reply to Joseph S. Myers from comment #15)
> In the cases where there is no statement either way, the behavior is
> undefined as a property of the translation unit (not just of the execution):
> it is not defined whether such a conversion may occur in a translation unit,

This is still not stated anywhere in the standard though.

> Being undefined
> through omission of definition has, as per clause 4, not difference in
> meaning or emphasis from being explicitly undefined.

Of course, but if the standard had explicitly stated that conversion between
function pointers and object pointers was undefined, it might be phrased in a
way that applies even to dead code. If you are relying on being undefined by
omission, you have to be really sure the behaviour is not defined *anywhere*,
including by general rules about dead code.

I will grant that the standard never explicitly says dead code is not executed
and has no effect, but if this is in dispute, we have a bigger problem.

> I'd suggest working with the Undefined Behavior Study Group on making it
> more explicit for each instance of undefined behavior whether it is a
> property of the program or of an execution thereof, but if any case seems
> particularly unclear, filing an issue once the new C standard issue tracker
> is up and running would probably be reasonable (but it seems likely that
> such issues would be referred to the UB study group to recommend a
> resolution just as floating-point issues would likely be referred to the CFP
> group).

Considering my stance is that WG14 have repeatedly and consistently stated what
the rules are, I see this as a waste of their time.

> It's *not* the case that the same rules apply everywhere, because there are
> two different kinds of UB depending on whether what's undefined is a
> property of the program or an execution thereof. Division by zero is
> obviously UB as a property of an execution, because whether a value is zero
> is a property of the execution.

Considering this example of 1/0 has been the subject of two separate DRs that I
referenced, I have to say it is not obvious from the standard itself. Keeping
in mind that the operands are constants and implementations are required to be
capable of constant expression evaluation in some contexts, a hypothetical
standard that permitted, or even required, this to be evaluated at translation
time (with undefined behaviour) even in otherwise dead code would make perfect
sense. But that is not the C standard we have, at least not the official
interpretation of it.

> Different types for the same identifier with
> external linkage in different translation units is obviously UB as a
> property of the program (and not widely diagnosed without LTO), as the whole
> concept of an identifier corresponding to an object with a particular value
> depends on a globally consistent notion of its type and the UB is about
> presence of declarations rather than a particular path of execution.

Yes, because a program that does not reference these identifiers still violates
the rule that specifies they must have compatible type. This means that there
is no execution of the program that avoids UB.

But in my program, there is no rule that is violated. Perhaps the rule that you
describe in your comment, that no program may contain any unsupported
conversion anywhere, regardless of whether the conversion is ever performed,
should exist, but it is simply not the case that there is such a rule to be
found anywhere in the standard.

One additional comment, though:

The fact that conversions between function pointers and object pointers are
rejected under -pedantic-errors means that 'gcc -std=c99 -pedantic-errors'
cannot be used as the implementation of POSIX's c99 utility: the c99 utility is
required to conform to the C99 standard and, simultaneously, to permit
conversions between function pointers and object pointers (at least in some
cases). (Adjust for later versions as needed.) This is unfortunate, and
regardless of whether the C standard allows such programs to be rejected, can
we agree that the C standard also allows them to be accepted, and POSIX
requires them to be accepted? Is that not already sufficient reason to
reconsider?

[Bug c/114526] ISO C does not prohibit extensions: fix misconception.

2024-04-02 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114526

--- Comment #18 from Harald van Dijk  ---
(In reply to Kaz Kylheku from comment #17)
> The standard does not define the conversion at the *type* level.
> ...
> The program is strictly conforming because it has no problem with type.

The DRs I referenced include ones where type errors have explicitly been stated
not to render behaviour undefined.

DR 132 (C90):

  /* No headers included */
  int checkup()
  {
  /* Case 1 */
  if (0)
  printf("Printing.\n");
  /* Case 2 */
  return 2 || 1 / 0;
  } 

  Response: "The Response to Defect Report #109 addresses this issue. The
translation unit must be successfully translated."

This, despite the fact that it implicitly declares printf as int(), which is
incompatible with the type it is meant to be declared as.

The distinction you see between type errors and non-type errors is not one that
I believe is supported by previous DR responses.

> We wouldn't say that
> 
>   void f(void) { "abc" / "def"; }
> 
> is strictly conforming because f is not called in the program. There is a
> type problem. Now in this case there is a constraint violation: it requires
> a diagnostic.

My position is that it is *only* because this violates a constraint that this
cannot be part of a strictly conforming program, even if never called, as far
as standard C is concerned. That is why implementations are allowed to reject
it without program flow analysis.

> Anyway, this is all moot because this bugzilla is about GNU C, which has the
> extension. The behavior is locally defined.

Sure, I'm happy to put that aside if it becomes irrelevant to the bug.

> We would like NOT to have a diagnostic under -Wpedantic, so we are on the
> same page.
> 
> Whether your program is strictly conforming or not, we would like not to
> have it diagnosed under the -Wpedantic umbrella, and even if it is changed
> to a program which calls f.
> 
> There is nothing wrong with the diagnostic, but it should be uncoupled from
> -Wpedantic and available under its own option.   Possibly, an umbrella
> option could exist for this kind of "super pedantic" errors, like
> -Wconforming-extensions (warn about the use of GNU extensions that are
> conforming, and thus require no diagnostic by ISO C).

Agreed that having the warning is useful. If 'gcc -std=c99 -pedantic-errors'
emits a warning for this, regardless of whether it is enabled by default, that
is fine, and that does not prevent it from being a valid implementation of the
'c99' utility.

[Bug c/114526] ISO C does not prohibit extensions: fix misconception.

2024-04-03 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114526

--- Comment #20 from Harald van Dijk  ---
(In reply to Kaz Kylheku from comment #19)

Needless to say I still disagree, but I interpreted your comment #17 as
suggesting this aspect of the discussion is neither necessary nor useful for
this bug, and agreed with that in comment #18. So let's actually stop this
aspect of the discussion.

[Bug libstdc++/114645] std::chrono::current_zone ignores $TZ on Linux

2024-04-09 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114645

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #14 from Harald van Dijk  ---
(In reply to Jonathan Wakely from comment #8)
> None of libstdc++, LLVM libc++, MSVC STL or the
> date/tz.h reference implementation uses $TZ for chrono::current_zone,

This does not appear to be accurate.

libc++ appears to always use $TZ on POSIX-like platforms if it is set:
https://github.com/llvm/llvm-project/blob/788be0d9fc6aeca548c90bac5ebe6990dd3c66ec/libcxx/src/tzdb.cpp#L708
MSVC's STL calls into __icu_ucal_getDefaultTimeZone. ICU's
ucal_getDefaultTimeZone uses the platform-specific way of getting the default
time zone, which on POSIX-like platforms does check getenv("TZ"), although of
course MSVC's STL is unlikely to be used on POSIX-like platforms.
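
For concreteness, the call being discussed (requires C++20 time-zone support
in the standard library; which zone it reports when $TZ is set is exactly the
point of contention):

  #include <chrono>
  #include <iostream>
  #include <string_view>

  int main() {
    // Per the links above: libc++ consults $TZ here, while libstdc++ returns
    // the system's configured zone regardless of $TZ.
    const std::chrono::time_zone* tz = std::chrono::current_zone();
    std::cout << tz->name() << '\n';
  }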

[Bug libstdc++/114645] std::chrono::current_zone ignores $TZ on Linux

2024-04-09 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114645

--- Comment #18 from Harald van Dijk  ---
(In reply to Jonathan Wakely from comment #16)
> ... incorrectly though?

Given that you have expressed your view that *any* attempt at using TZ is
inherently incorrect, I am not surprised that you view libc++'s attempt as
incorrect. :)

I am not intending to get involved in discussion of what the correct behaviour
is, I merely wanted to correct the record about what other implementations do.

[Bug libstdc++/114645] std::chrono::current_zone ignores $TZ on Linux

2024-04-09 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114645

--- Comment #21 from Harald van Dijk  ---
(In reply to Jonathan Wakely from comment #20)
> (In reply to Harald van Dijk from comment #18)
> > (In reply to Jonathan Wakely from comment #16)
> > > ... incorrectly though?
> > 
> > Given that you have expressed your view that *any* attempt at using TZ is
> > inherently incorrect, I am not surprised that you view libc++'s attempt as
> > incorrect. :)
> 
> That's not what I mean.

You wrote in 
which you referenced in comment #1: "And in any case, I don't think we want to
depend on $TZ. That's not the intended design of the std::chrono API." Earlier
in that thread, you had written in
: "In any case,
the C++ standard requires that current_zone() refers to the computer's zone,
not just the current process' TZ setting:"

It's fine if you have changed your mind since then, but it is hard for me to
read those statements any other way than that any attempt at using TZ is
inherently incorrect.

[Bug c++/115222] gcc ignores noexcept on fields' deconstructors in an union

2024-05-25 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115222

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #5 from Harald van Dijk  ---
I end up with a different reduced test case that does not involve unions:

template <typename _Tp> _Tp declval() noexcept;

template <typename _Tp>
inline constexpr bool is_nothrow_destructible_v = noexcept(declval<_Tp>());

struct A { ~A() noexcept(false) = delete; };
struct B : A { ~B(); };
static_assert(is_nothrow_destructible_v<B>);

The assertion passes in GCC, fails in clang, but I think clang is right here.
It looks like GCC ignores the deleted destructor for determining whether B's
destructor should be implicitly noexcept, but the wording that Andrew Pinski
referenced in comment #2 says B's destructor is potentially throwing "if any of
the destructors for any of its potentially constructed subobjects has a
potentially-throwing exception specification" without regard to whether those
destructors are deleted.

[Bug c/115566] Arrays of character type initialized with parenthesized string literals shouldn't be diagnosed with -pedantic (at least in C23)

2024-06-22 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115566

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #8 from Harald van Dijk  ---
I believe this bug is valid; the change from

> A parenthesized expression is a primary expression. Its type and value are
> identical to those of the unparenthesized expression. It is an lvalue, a
> function designator, or a void expression if the unparenthesized expression
> is, respectively, an lvalue, a function designator, or a void expression.

to the already quoted

> A parenthesized expression is a primary expression. Its type, value, and
> semantics are identical to those of the unparenthesized expression.

only makes sense if the intent was to also make parenthesized expressions
equivalent to unparenthesized expressions in other ways than those previously
enumerated.

But at any rate, GCC is inconsistent. The exact same argument applies to null
pointer constants. Null pointer constants are defined as

> An integer constant expression with the value 0, or such an expression cast
> to type void *, is called a null pointer constant.

Note that such an expression cast to type void *, and then wrapped in
parentheses, is not explicitly included.

Yet GCC accepts

  #define NULL ((void*)0)
  int main(void) {
void(*fp)(void) = NULL;
  }

when the same overly pedantic reading should result in "ISO C forbids
initialization between function pointer and 'void *'" here.

Either parenthesized expressions are just generally equivalent to
non-parenthesized expressions, and both the OP's code and this program are
valid, or parenthesized expressions are only like the non-parenthesized
expressions in the specifically enumerated ways, in which case both the OP's
code and this program violate a constraint and require a diagnostic. You can
choose which of those interpretations you prefer, but they both indicate a bug
in GCC.

[Bug preprocessor/115903] libcpp/macro.cc:528:19: style: Obsolete function 'asctime' called

2024-07-13 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115903

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #6 from Harald van Dijk  ---
> man asctime says marked obsolete in POSIX.1-2008, which is 16 years ago.

This is misleading. POSIX had no authority in 2008 to declare that asctime was
considered for removal (which is what their marking as "obsolescent" is
specified to mean), because this is an ISO C function, POSIX defers to the C
standard, and the C standard did not deprecate it until C23. That said, now
that it is deprecated in ISO C, the result is still the same.

[Bug c/44179] warn about sizeof(char) and sizeof('x')

2023-12-16 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=44179

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #4 from Harald van Dijk  ---
(In reply to Zack Weinberg from comment #3)
> See
> http://codesearch.debian.net/
> search?q=filetype%3Ac+%5Cbsizeof%5Cs*%5C%28%5Cs*%27&literal=0 for many
> examples.

There is a lot in there where a warning would be useful, but also a lot where a
warning would not be useful, because the only reason the code does sizeof('x')
is to make sure that it actually is sizeof(int), i.e. to check that it is not
being compiled with a C++ compiler, or with a C compiler that picked up some
C++isms by mistake. If GCC decides to warn about that, it is not
obvious how the code should be rewritten to silence the warning, but still have
the desired effect. In other places, redundant parentheses can be added to
indicate that a use is intentional ("if (a = b)" warns, "if ((a = b))" shuts up
the compiler), but sizeof('x') already has redundant parentheses. Would users
then be suggested to write sizeof(('x'))?
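
For reference, a sketch of the kind of C-vs-C++ detection described above
(hypothetical, not taken from the Debian code search results): the array size
is negative, and compilation fails, exactly when 'x' does not have the size of
int.

  /* Fails to compile when character constants do not have type int,
     i.e. when this is (accidentally) compiled as C++ rather than C. */
  struct require_c_char_constants {
    int check[sizeof('x') == sizeof(int) ? 1 : -1];
  };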

[Bug tree-optimization/113049] Compiles to strlen even with -fno-builtin-strlen -fno-optimize-strlen

2023-12-17 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113049

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #8 from Harald van Dijk  ---
(In reply to Georg-Johann Lay from comment #5)
> So then -fno-builtin should also not work? GCC documentation of -fno-builtin
> is the same like for -fno-builtin-function.

-fno-builtin implies -fno-tree-loop-distribute-patterns
(https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=b15458becf4086c463cba0c42db1d8780351201b);
-fno-builtin-strlen does not. But I think you are right that this does not
match the documentation.

(In reply to Georg-Johann Lay from comment #7)
> The documentation of -ftree-loop-distribute-patterns does not relate in any
> way to that.  It's impossible to find this option from a problem description.

-ffreestanding also implies -fno-tree-loop-distribute-patterns, and that option
is documented in a way that would help here. Unless -ffreestanding is used, GCC
assumes the presence of a conforming standard library. It may expand calls to
library functions based on knowledge of what these functions do, and it may
replace code by calls to library functions as well for the same reason.

-fno-builtin and -fno-builtin-(function) are both documented as stopping the
former, but neither is documented as stopping the latter. That one of them does
stop the latter anyway, while the other does not, is surprising.
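
As an illustration of the surprise, a loop of roughly this shape (a sketch, not
the reporter's testcase) can be recognised by -ftree-loop-distribute-patterns
and turned into a call to strlen even when -fno-builtin-strlen was given:

  #include <stddef.h>

  /* May be compiled into a call to strlen() unless
     -fno-tree-loop-distribute-patterns (or -fno-builtin or -ffreestanding,
     which imply it) is in effect. */
  size_t my_strlen(const char *s)
  {
    size_t n = 0;
    while (s[n])
      n++;
    return n;
  }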

[Bug c++/113110] GCC rejects call to more specialized const char array version with string literal

2023-12-22 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113110

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #6 from Harald van Dijk  ---
(In reply to Jason Liam from comment #3)
> Are you sure? I mean if you add another template parameter `U` to the second
> parameter and use it then gcc starts accepting the code and using the more
> specialized version. Demo:https://godbolt.org/z/W7Ma6c5Ts

In order to determine whether int compare(const char (&)[N], const char (&)[M])
is more specialised than int compare(const T &, const T &), template argument
deduction is attempted to solve one in terms of the other. Solving gives T =
char[N] for the first parameter, but T = char[M] for the second parameter. This
is a conflict that is ignored by MSVC.

When the second template parameter `U` is added, solving gives T = char[N] and
U = char[M]. This is never a conflict, so it unambiguously makes the array
version more specialised.
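
A sketch of the two situations (reconstructed, so it may differ in detail from
the testcase in the bug; compare2 is just a renamed variant for illustration):

  #include <cstddef>

  template <typename T>
  int compare(const T &, const T &);                  // generic overload

  template <std::size_t N, std::size_t M>
  int compare(const char (&)[N], const char (&)[M]);  // array overload

  // Deducing T from char[N] and char[M] conflicts unless N == M, so GCC and
  // clang do not treat the array overload as more specialised. With a second
  // type parameter there is no conflict:
  template <typename T, typename U>
  int compare2(const T &, const U &);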

(In reply to Andrew Pinski from comment #5)
> I am still suspecting MSVC of not implementing the C++ Defect report 214 .

At first glance, the GCC/clang and MSVC behaviours both look like legitimate
but different interpretations of DR 214. Whether MSVC is right to ignore that
conflict is an open question covered by
https://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#2160.

[Bug sanitizer/113628] -fsanitize=undefined failed to check a signed integer overflow

2024-01-27 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113628

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #1 from Harald van Dijk  ---
These two files are not equivalent. The equivalent would be
 long TVH = (g_106 / (g_51 ? g_51 : 16653417461));
because that is the type of that subexpression. The constant of type long
causes everything to be promoted to long, with the result only truncated to int
at the end.
That is well-defined. By making TVH an int, all the other operations are
performed in type int as well.

[Bug c++/113830] GCC accepts invalid code when instantiating the local class inside a function

2024-02-08 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113830

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #12 from Harald van Dijk  ---
(In reply to Bo Wang from comment #11)
> I have read the working draft standard of C++20
> (https://github.com/cplusplus/draft/tree/c%2B%2B20).
> 
> Following the subsection "13.9.2 Explicit instantiation" in the section
> "13.9 Template instantiation and specialization", the statement `template
> void f();` is an explicit instantiation, which requires instantiating
> everything in the function.

Where are you getting "everything in the function" from? It seems to say rather
the opposite in [temp.explicit]p14:

> An explicit instantiation does not constitute a use of a default argument, so 
> default argument instantiation is not done.
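
For reference, the example attached to that wording in [temp.explicit] is
roughly the following (quoted from memory, so details may differ):

  char* p = 0;
  template<class T> T g(T x = &p);
  template int g<int>(int);  // OK even though &p is not an int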

Now, the example shows that this was intended to apply to default arguments of
the function itself, but the actual wording does not limit it to that, so I
actually think this is a bug in clang: by the current wording, this must be
accepted?

[Bug c++/113830] GCC accepts invalid code when instantiating the local class inside a function

2024-02-09 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113830

--- Comment #14 from Harald van Dijk  ---
(In reply to Bo Wang from comment #13)
> (In reply to Harald van Dijk from comment #12)
> > (In reply to Bo Wang from comment #11)
> > > I have read the working draft standard of C++20
> > > (https://github.com/cplusplus/draft/tree/c%2B%2B20).
> > > 
> > > Following the subsection "13.9.2 Explicit instantiation" in the section
> > > "13.9 Template instantiation and specialization", the statement `template
> > > void f();` is an explicit instantiation, which requires instantiating
> > > everything in the function.
> > 
> > Where are you getting "everything in the function" from? It seems to say
> > rather the opposite in [temp.explicit]p14:
> > 
> > > An explicit instantiation does not constitute a use of a default 
> > > argument, so default argument instantiation is not done.
> > 
> > Now, the example shows that this was intended to apply to default arguments
> > of the function itself, but the actual wording does not limit it to that, so
> > I actually think this is a bug in clang, by the current wording this must be
> > accepted?
> 
> Please refer to the example in Comment 9 which has no default arguments.

Okay, sure, but if we have established that the standard does not say
"everything in the function" needs to be instantiated, where does it say that
*this* needs to be instantiated?

> For the standard, I found this one in "13.9 Template instantiation and
> specialization" p6 of C++20, which requires access checking.

That explains that the special exception that generally applies to template
instantiations does not apply here. This means the usual rules apply, so for
instance, you can't refer to a private member of a class unless you're a
friend. But for templates, these usual rules apply upon instantiation, so we
still need to establish whether or not this is required to be instantiated.

[Bug c++/113760] [DR1693] gcc rejects valid empty-declaration in pedantic mode

2024-02-12 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113760

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #11 from Harald van Dijk  ---
(In reply to Marek Polacek from comment #8)
> -std=c++03 -pedantic-errors -Wextra-semi -> errors (?)

Speaking as a user: that makes sense to me, but I would also expect:

-std=c++03 -pedantic-errors -Wno-error=extra-semi -> warnings

[Bug c++/113760] [DR1693] gcc rejects valid empty-declaration in pedantic mode

2024-02-12 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113760

--- Comment #13 from Harald van Dijk  ---
(In reply to Marek Polacek from comment #12)
> Thank for your comment.  In the end I went with
> 
> -std=c++03 -pedantic-errors -Wextra-semi -> warnings
> -std=c++03 -pedantic -Wextra-semi -> warnings (not pedwarn)
> 
> based on the principle that a more specific option overrides a more general
> option.  This is also what clang++ does.  Granted, -Wvla in C doesn't behave
> like that...

That also makes sense. The more specific option overriding a more general
option is also the reasoning why I expect no error with -pedantic-errors
-Wno-error=extra-semi.

[Bug c/116631] [gcc] c23 - 'auto' struggles with comma expression type inference.

2024-09-07 Thread harald at gigawatt dot nl via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116631

Harald van Dijk  changed:

   What|Removed |Added

 CC||harald at gigawatt dot nl

--- Comment #4 from Harald van Dijk  ---
Note that although the standard's restriction is on structure members' names,
that is not what GCC implements. If GCC merely implemented that, it would
accept

auto a = ( sizeof (struct {}), 2 );

which is non-standard but uses a documented and supported GCC extension (empty
structures) and violates no other rule in either the C standard or GCC's
documentation, yet it is still rejected with the same error

error: 'struct ' defined in underspecified object initializer

The error makes sense, but it reflects a mismatch between the documentation and
the implementation.

I have no opinion on whether the error should be removed (both for the original
example and this modified one), except that if this is kept an error, I would
ask for the documentation to be updated to match.