[Bug c/115729] case label does not reduce to an integer constant

2024-07-02 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115729

--- Comment #2 from Stas Sergeev  ---
> rejects-valid

You meant accepts-invalid?

Anyway, constexpr makes it consistent, thanks!
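For reference, a minimal sketch of what constexpr changes here (illustrative
only, not the attached preprocessed source; assumes a C23-capable gcc invoked
with -std=c23/-std=c2x, and made-up names and values):

---
#include <stdint.h>

static const uint64_t OLD = 4; /* not an integer constant expression in C */
constexpr uint64_t NEW = 5;    /* C23 constexpr: a genuine constant */

int f(uint64_t v)
{
    switch (v) {
    case NEW:   /* always accepted */
        return 1;
#ifdef TRY_OLD
    case OLD:   /* the construct this report is about: may trigger
                   "case label does not reduce to an integer constant",
                   depending on optimization level */
        return 2;
#endif
    }
    return 0;
}
---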

[Bug c/115729] New: case label does not reduce to an integer constant

2024-07-01 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115729

Bug ID: 115729
   Summary: case label does not reduce to an integer constant
   Product: gcc
   Version: 14.1.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

Created attachment 58550
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=58550&action=edit
preprocessed source

Compiling the attached preprocessed
source, gives lots of "case label does not reduce to an integer constant"
errors, unless optimization is enabled.

clang compiles that fine.
I also tried to produce a test case by doing a `switch` on a uint64_t var
defined as `static const`, but that compiles. So I don't know what exactly
breaks at -O0 in that preprocessed source.

[Bug c++/111923] default argument is not treated as a complete-class context of a class

2023-10-24 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111923

--- Comment #11 from Stas Sergeev  ---
So if I understand correctly, before
your proposal the following code was
conforming:

template <auto O>
struct B {
static constexpr int off = O();
};

struct A {
char a;
B<[]() static constexpr ->int { return offsetof(A, b); }> b;
};

due to 7.1 (function body) + note4 for the closure as a nested class.
Do you know of any specific reason why that code should be disallowed,
rather than supported per the current standard?
After reinterpret_cast in constexpr was also disallowed, it became quite
challenging to come up with a workaround.

[Bug c++/111923] default argument is not treated as a complete-class context of a class

2023-10-24 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111923

--- Comment #10 from Stas Sergeev  ---
OMG, not what I intended to get. :(
All I need is to use offsetof() in templates.
Last time I started to use reinterpret_cast
for that, you disallowed reinterpret_cast in
constexpr context. Now this...
Why is it so important to close every possible loophole for the use of
offsetof() in templates?

[Bug c++/111923] default argument is not treated as a complete-class context of a class

2023-10-24 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111923

--- Comment #8 from Stas Sergeev  ---
Added a few experts who can probably answer that. While I do not doubt
that Andrew is right, I am sure that having a properly spelled-out
explanation will help.

[Bug c++/111923] default argument is not treated as a complete-class context of a class

2023-10-23 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111923

--- Comment #7 from Stas Sergeev  ---
I also verified your assumption in comment #5 with this code:

struct A {
    struct dummy {
        static constexpr const int foo(const int off = offsetof(A, a)) { return off; }
        static constexpr const int operator()() { return foo(); }
    };
    static constexpr const int (*off_p)() = &dummy::operator();

    int t[off_p()];
    char a;
};

It says:

error: size of array ‘t’ is not an integral constant-expression
   11 | int t[off_p()];

So it seems "constexpr const" is not enough to alter the structure size,
so your fears about that were likely unfounded.

So could you please explain why note4 doesn't apply to the nested closure
type? Unless there is a consistent explanation, chances are that a bug is
being missed. :(

[Bug c++/111923] default argument is not treated as a complete-class context of a class

2023-10-22 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111923

--- Comment #6 from Stas Sergeev  ---
(In reply to Andrew Pinski from comment #5)
> Nope, lamdba's are not a nested class.

But according to this:
https://timsong-cpp.github.io/cppwp/n3337/expr.prim.lambda#3
The type of the lambda-expression (which is also the type of the closure
object) is a unique, unnamed non-union class type — called the closure type —
whose properties are described below. This class type is not an aggregate
([dcl.init.aggr]). The closure type is declared in the smallest block scope,
class scope, or namespace scope that contains the corresponding
lambda-expression.

So am I right that defining a lambda in class A means defining it in class
scope, in which case note4 should apply? What am I missing?

Additionally I've got this to compile:

struct A {
    struct dummy {
        static constexpr int foo(int off = offsetof(A, a)) { return off; }
        static constexpr int operator()() { return foo(); }
    };
    static constexpr int (*off_p)() = &dummy::operator();

    int x;
    char a;
};

Seems like note4 applies in that case.
But it should be very similar to the
"closure type" described above...

[Bug c++/111923] default argument is not treated as a complete-class context of a class

2023-10-22 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111923

--- Comment #4 from Stas Sergeev  ---
(In reply to Andrew Pinski from comment #3)
> One more note, default argument clause does not apply here as the it is not
> an argument of a method of that class but rather a different context (the
> lamdba definition context).

Yes, but doesn't this apply to the note4?
[Note 4: A complete-class context of a nested class is also a complete-class
context of any enclosing class, if the nested class is defined within the
member-specification of the enclosing class.
— end note]

Isn't lambda an instance of a nested class here?

> int off_p = offsetof(A, a);
> is well formed due to it being a DMI.

Thanks for the info.
Do you happen to know the particular reason why the standard disallows this
for a static member? Shouldn't that be a DR?

[Bug c++/111923] New: default argument is not treated as a complete-class context of a class

2023-10-22 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111923

Bug ID: 111923
   Summary: default argument is not treated as a complete-class
context of a class
   Product: gcc
   Version: 13.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

The standard says:

https://eel.is/c++draft/class.mem.general#7.2

A complete-class context of a class (template) is a

(7.1)function body ([dcl.fct.def.general]),
(7.2)default argument ([dcl.fct.default]),
...
[Note 4: A complete-class context of a nested class is also a complete-class
context of any enclosing class, if the nested class is defined within the
member-specification of the enclosing class.
— end note]


I think this means that the following
code should compile:

struct A {
char a;
static constexpr int (*off_p)(int p) =
[](int off = offsetof(A, a)) static constexpr ->int { return off; };
};

But it fails with:

bad.cpp:6:22: error: invalid use of incomplete type ‘struct A’

Surprisingly, this code does actually compile:

struct A {
char a;
int (*off_p)(int p) =
[](int off = offsetof(A, a)) static constexpr ->int { return off; };
};

(the only difference is that off_p
is no longer static constexpr).
But I need this code to compile when
off_p is a static constexpr.

[Bug c++/109824] aligned attribute lost on first usage

2023-05-12 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109824

Stas Sergeev  changed:

   What|Removed |Added

 Status|RESOLVED|UNCONFIRMED
 Resolution|DUPLICATE   |---

--- Comment #3 from Stas Sergeev  ---
Andrew, why not at least read the bug description, or try what was said in it?
It says:

But comment out the line 9 under "ifndef BUG"
and it compiles without an error.

This has nothing to do with the bug you referred to as a duplicate.
Please indicate that you tried that before closing.

[Bug c++/109824] aligned attribute lost on first usage

2023-05-12 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109824

--- Comment #1 from Stas Sergeev  ---
Sorry, I copied the output from the wrong place.
The real error message looks like this:

$ g++ -Wall -c a.cpp 
a.cpp: In member function ‘less_aligned_a& t1::get_ref()’:
a.cpp:17:16: error: cannot bind packed field ‘((t1*)this)->t1::i’ to
‘less_aligned_a&’ {aka ‘a&’}
   17 | return i;

[Bug c++/109824] New: aligned attribute lost on first usage

2023-05-12 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109824

Bug ID: 109824
   Summary: aligned attribute lost on first usage
   Product: gcc
   Version: 12.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

Created attachment 55063
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=55063&action=edit
test case

struct a {
short aa;
};
typedef struct a less_aligned_a __attribute__ ((aligned (1)));

static inline void foo(void)
{
#ifndef BUG
struct a aa __attribute__((unused));
#endif
}

class t1 {
    less_aligned_a i;
public:
    less_aligned_a &get_ref() {
        return i;
    }
} __attribute__((packed));

t1 ap;
less_aligned_a* a = &ap.get_ref();

-

$ g++ -Wall -c a.cpp 
a.cpp: In instantiation of ‘less_aligned_a& t1::get_ref() [with T = int;
less_aligned_a = a]’:
a.cpp:23:32:   required from here
a.cpp:18:16: error: cannot bind packed field ‘((t1*)this)->t1::i’ to
‘less_aligned_a&’ {aka ‘a&’}
   18 | return i;



But comment out the line 9 under "ifndef BUG"
and it compiles without an error.

[Bug driver/109217] failure statically linking shared lib

2023-03-21 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109217

--- Comment #4 from Stas Sergeev  ---
(In reply to Richard Biener from comment #3)
> -static-pie is now marked as the negative of -shared, so it works with that
> (the later cancelling out the earlier).  It isn't handled that way for
> -static vs. -shared, not sure if we can use Negative() with multiple options.

So could you please suggest the exact
command line that works?
I tried:

$ LC_ALL=C cc -Wall -o libmain.so main.c -shared -static-pie
/usr/bin/ld:
/usr/lib/gcc/x86_64-linux-gnu/12/../../../x86_64-linux-gnu/rcrt1.o: in function
`_start':
(.text+0x1b): undefined reference to `main'
collect2: error: ld returned 1 exit status


So -static-pie indeed, as you say, just cancels -shared. Is there any
-static-solib or the like, to produce the solib in the end?

[Bug target/109217] failure statically linking shared lib

2023-03-20 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109217

--- Comment #1 from Stas Sergeev  ---
So as #7516 suggests, it is now indeed
rejected. :(
And at the same time clang has no problem
with that combination of options.
Please make that a valid option combination
again.

[Bug libgcc/109217] New: failure statically linking shared lib

2023-03-20 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109217

Bug ID: 109217
   Summary: failure statically linking shared lib
   Product: gcc
   Version: 12.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: libgcc
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

Use any dummy source like this:

void foo(void) {}

Then:
$ cc -Wall -o libmain.so -shared main.c -static
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/12/crtbeginT.o: relocation
R_X86_64_32 against hidden symbol `__TMC_END__' can not be used when making a
shared object
/usr/bin/ld: failed to set dynamic section sizes: bad value
collect2: error: ld returned 1 exit status

[Bug c++/108538] unexpected -Wnarrowing errors in -fpermissive mode

2023-01-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108538

--- Comment #4 from Stas Sergeev  ---
(In reply to Jonathan Wakely from comment #3)
> It seems like you might be expecting more from -fpermissive than it actually
> provides. It only affects a very limited set of diagnostics, and isn't a
> general "compile invalid code" switch.

I always used it to compile (valid) C code in C++ mode. I thought that's
what it is for. It violates the C++ standard up and down. And this
-Wnarrowing case is "better" than the others because it was a warning in
C++03. The other problems that -fpermissive allows were always errors in
any C++ mode.

[Bug c++/108538] unexpected -Wnarrowing errors in -fpermissive mode

2023-01-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108538

--- Comment #2 from Stas Sergeev  ---
(In reply to Andreas Schwab from comment #1)
> It depends on the selected C++ standard.  C++11 does not allow narrowing
> conversions unconditionally.

Yes, I am not disputing that.
But I use -fpermissive mode to compile a mix of C and C++. -fpermissive
downgrades many C++ errors to warnings, accepting most regular C. So my
question here is explicitly about -fpermissive mode: I think it should
downgrade -Wnarrowing back to a warning.

[Bug c++/108538] New: unexpected -Wnarrowing errors in -fpermissive mode

2023-01-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108538

Bug ID: 108538
   Summary: unexpected -Wnarrowing errors in -fpermissive mode
   Product: gcc
   Version: 12.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

int main()
{
unsigned char a[1] = { -1 };
return a[0];
}

$ g++ -fpermissive nar.cpp 
nar.cpp: In function ‘int main()’:
nar.cpp:3:28: error: narrowing conversion of ‘-1’ from ‘int’ to ‘unsigned char’
[-Wnarrowing]
3 | unsigned char a[1] = { -1 };


While I know that some -Wnarrowing warnings were promoted to errors, was
that the right decision also for -fpermissive mode, which accepts most
C code?
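(A possible workaround for this particular diagnostic, not something the
report asks about: -Wno-narrowing suppresses it, e.g.

$ g++ -fpermissive -Wno-narrowing nar.cpp
)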

[Bug c/107477] New: spurious -Wrestrict warning

2022-10-31 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107477

Bug ID: 107477
   Summary: spurious -Wrestrict warning
   Product: gcc
   Version: 12.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

Created attachment 53804
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=53804&action=edit
test case

$ gcc -Wrestrict -O1 -c www.c 
In file included from /usr/include/string.h:535,
 from www.c:1:
In function ‘strcpy’,
inlined from ‘foo’ at www.c:11:5:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:79:10: warning:
‘__builtin_strcpy’ accessing 1 byte at offsets [0, 327680] and [0, 327680]
overlaps 1 byte at offset [0, 327679] [-Wrestrict]


Or:

$ gcc -Wrestrict -m32 -O2 -c www.c 
In file included from /usr/include/string.h:535,
 from www.c:1:
In function ‘strcpy’,
inlined from ‘foo’ at www.c:11:5:
/usr/include/bits/string_fortified.h:79:10: warning: ‘__builtin_strcpy’
accessing 1 byte at offsets [0, 327680] and [0, 327680] overlaps 1 byte at
offset [0, 327679] [-Wrestrict]
   79 |   return __builtin___strcpy_chk (__dest, __src, __glibc_objsize
(__dest)


It doesn't warn with -O2 without -m32.
And if we remove lines 12 and 13 of the test case, then it never warns
with -O2, whether -m32 is used or not.

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error and pthread_cancel

2022-10-18 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #18 from Stas Sergeev  ---
(In reply to Stas Sergeev from comment #5)
> And its running on a stack previously
> poisoned before pthread_cancel().

And the reason for that is that the glibc in use was not built with
-fsanitize=address. When it calls its __do_cancel(), which has the
"noreturn" attribute, __asan_handle_no_return() is not called. Therefore
the cancelled thread remains with the poison below SP.
I believe a glibc rebuilt with asan would not exhibit the crash.

Note: all the URLs above where I was pointing to the code are now either
dead links or point to the wrong lines. It's quite a shame that such a bug
remains unfixed after a complete explanation was provided, but now that
explanation has rotted...
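A rough sketch of the mechanism described above (illustrative only;
do_cancel and instrumented_caller are made-up stand-ins, not the glibc code):

---
/* When a file is built with -fsanitize=address, the compiler inserts a call
   to __asan_handle_no_return() before a call to a noreturn function, which
   unpoisons the stack below SP.  glibc is not built with ASan, so its call
   to the noreturn __do_cancel() gets no such treatment and the poison stays. */
void __asan_handle_no_return(void);                     /* provided by libasan */
__attribute__((noreturn)) extern void do_cancel(void);  /* stand-in for __do_cancel() */

void instrumented_caller(void)
{
    __asan_handle_no_return();   /* what ASan instrumentation would insert */
    do_cancel();
}
---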

[Bug rtl-optimization/104777] [9/10 Regression] gcc crashes while compiling a custom coroutine library sample

2022-06-13 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104777

--- Comment #14 from Stas Sergeev  ---
(In reply to Uroš Bizjak from comment #13)
> Please backport the patch also to gcc-10 branch.

9.4.0 fails for me on ubuntu-20.
8.5.0 also fails.
Please back-port to all possible
branches.

[Bug rtl-optimization/105936] [10 Regression] ICE with inline-asm and TLS on x86_64 and -O2 in move_insn

2022-06-13 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105936

Stas Sergeev  changed:

   What|Removed |Added

 Resolution|DUPLICATE   |FIXED

--- Comment #6 from Stas Sergeev  ---
But no patch has been backported, and the ticket is already being closed?

[Bug gcov-profile/105936] New: internal compiler error: in move_insn, at haifa-sched.c:5463

2022-06-12 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105936

Bug ID: 105936
   Summary: internal compiler error: in move_insn, at
haifa-sched.c:5463
   Product: gcc
   Version: 9.4.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: gcov-profile
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
CC: marxin at gcc dot gnu.org
  Target Milestone: ---

Created attachment 53124
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=53124&action=edit
pre-processed source

The problem happens with 9.4.0 on ubuntu-20, and with a more recent gcc on
ubuntu-21.10, but I am not sure what exact gcc version that is.

$ gcc -O2 -c -xc int.E

during RTL pass: sched2
/<>/build/../src/base/core/int.c: In function
‘int33_unrevect_fixup’:
/<>/build/../src/base/core/int.c:1746:1: internal compiler error:
in move_insn, at haifa-sched.c:5463

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error and pthread_cancel

2022-02-11 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #17 from Stas Sergeev  ---
I sent the small patch-set here:
https://lore.kernel.org/lkml/20220126191441.3380389-1-st...@yandex.ru/
but it is so far ignored by kernel developers.
Someone from this bugzilla should give me an
Ack or Review, or this won't float.

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error and pthread_cancel

2022-01-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #16 from Stas Sergeev  ---
I think I'll propose to apply something like this to linux kernel:

diff --git a/kernel/signal.c b/kernel/signal.c
index 6f3476dc7873..0549212a8dd6 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -4153,6 +4153,7 @@ do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp,
 		if (ss_mode == SS_DISABLE) {
 			ss_size = 0;
 			ss_sp = NULL;
+			ss_flags = SS_DISABLE;
 		} else {
 			if (unlikely(ss_size < min_ss_size))
 				ret = -ENOMEM;

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error and pthread_cancel

2022-01-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #15 from Stas Sergeev  ---
(In reply to Martin Liška from comment #14)
> Please report to upstream as well.

I'd like some guidance on how that should be addressed, because that will
determine which upstream to report to.
I am not entirely sure that linux is doing the right thing, and I am not
sure the man page even makes sense when it says:
---
The old_ss.ss_flags may return either of the following values:

   SS_ONSTACK
   SS_DISABLE
   SS_AUTODISARM
---

... because what I see returned is "SS_DISABLE|SS_AUTODISARM", which is
what I write to the flags for probing.
This is kludgy.
Does anyone know what fix this should get?

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error and pthread_cancel

2022-01-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #13 from Stas Sergeev  ---
Found another problem.
https://github.com/gcc-mirror/gcc/blob/master/libsanitizer/asan/asan_posix.cpp#L53
The comment above that line talks about SS_AUTODISARM, but the line itself
does not account for any flags. Meanwhile, linux returns SS_DISABLE in
combination with other flags, like SS_AUTODISARM, so the "!=" check should
not be used.

My app probes for SS_AUTODISARM by trying to set it, and after that, asan
breaks. This is quite kludgy though.
Should the check be changed to
if (!(signal_stack.ss_flags & SS_DISABLE))
or should linux maybe not return any flags together with SS_DISABLE?
The man page says "strange things" on that subject.
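A small sketch of the two checks being compared (illustrative only; the
combined SS_DISABLE|SS_AUTODISARM value is the one described above):

---
#define _GNU_SOURCE
#include <signal.h>
#include <stdbool.h>

/* Returns true if an alternate signal stack is currently in use. */
static bool altstack_enabled(const stack_t *ss)
{
    bool old_check = ss->ss_flags != SS_DISABLE;    /* current libsanitizer logic */
    bool new_check = !(ss->ss_flags & SS_DISABLE);  /* proposed flag test */
    /* With ss_flags == (SS_DISABLE | SS_AUTODISARM), the "!=" check wrongly
       says "enabled", while the flag test correctly says "disabled". */
    (void)old_check;
    return new_check;
}
---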

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error

2022-01-20 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #11 from Stas Sergeev  ---
The third bug here seems to be that __asan_handle_no_return():
https://github.com/gcc-mirror/gcc/blob/master/libsanitizer/asan/asan_rtl.cpp#L602
also calls sigaltstack() before unpoisoning the stacks. I believe this
makes the problem much more reproducible; for example, a test case with
longjmp() is likely possible too. I found out about this instance by trying
to call __asan_handle_no_return() manually as a pthread cleanup handler,
in the hope of working around the destructor bug. But it appears
__asan_handle_no_return() does the same thing.
So the fix should be to move this line:
https://github.com/gcc-mirror/gcc/blob/master/libsanitizer/asan/asan_rtl.cpp#L607
above the PlatformUnpoisonStacks() call.

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error

2022-01-19 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #9 from Stas Sergeev  ---
(In reply to Martin Liška from comment #8)
> Please report the problem to upstream libsanitizer project:
> https://github.com/llvm/llvm-project/issues

I already did:
https://github.com/google/sanitizers/issues/1171#issuecomment-1015913891
But the URL is different; should I also report that to llvm-project?

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error

2022-01-18 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #7 from Stas Sergeev  ---
Created attachment 52221
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=52221&action=edit
test case

This is a reproducer for both problems.

$ cc -Wall -o bug -ggdb3 -fsanitize=address bug.c -O1
to see the canary overwrite problem.

$ cc -Wall -o bug -ggdb3 -fsanitize=address bug.c -O0
to see the poisoned stack after pthread_cancel()
problem.

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error

2022-01-18 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #6 from Stas Sergeev  ---
I think the fix (of at least 1 problem here)
would be to move this line:
https://code.woboq.org/gcc/libsanitizer/asan/asan_thread.cc.html#109
upwards, before this:
https://code.woboq.org/gcc/libsanitizer/asan/asan_thread.cc.html#103
It will then unpoison the stack before
playing its sigaltstack games.
But I don't know how to test that idea.

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error

2022-01-18 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #5 from Stas Sergeev  ---
Another problem here seems to be that pthread_cancel() doesn't unpoison the
cancelled thread's stack. This causes dtors to run on a randomly poisoned
stack, depending on where the cancellation happened. That explains the
"random" nature of the crash, and the fact that pthread_cancel() is in the
test case attached to this ticket, and in my program as well.

So, the best diagnostic I can come up with is that after pthread_cancel()
we have this:
---
#0  __sanitizer::UnsetAlternateSignalStack ()
at
../../../../libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cpp:190
#1  0x77672f0d in __asan::AsanThread::Destroy (this=0x7358e000)
at ../../../../libsanitizer/asan/asan_thread.cpp:104
#2  0x769d2c61 in __GI___nptl_deallocate_tsd ()
at nptl_deallocate_tsd.c:74
#3  __GI___nptl_deallocate_tsd () at nptl_deallocate_tsd.c:23
#4  0x769d5948 in start_thread (arg=)
at pthread_create.c:446
#5  0x76a5a640 in clone3 ()
at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
---

And it's running on a stack previously poisoned before pthread_cancel().
Then it detects the access to the poisoned area and tries to do a stack
trace. But that fails too, because the redzone canary is overwritten.
So all we get is a crash.
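A rough sketch of the scenario just described (illustrative only; the real
reproducer is attachment 52221, and this sketch, built with
-fsanitize=address -pthread, may or may not actually trigger the crash):

---
#include <pthread.h>
#include <unistd.h>

static void deep(void)
{
    volatile char buf[4096];   /* ASan poisons redzones around this frame */
    buf[0] = 0;
    pause();                   /* cancellation point; the frame never returns */
}

static void *worker(void *arg)
{
    (void)arg;
    deep();
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(t);   /* thread dies with the deep() frame still poisoned */
    pthread_join(t, NULL);
    return 0;
}
---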

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error

2022-01-18 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #4 from Stas Sergeev  ---
Thread 3 "X ev" hit Breakpoint 4, __sanitizer::UnsetAlternateSignalStack () at
../../../../libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cpp:190
190 void UnsetAlternateSignalStack() {
(gdb) n
194   altstack.ss_size = GetAltStackSize();  // Some sane value required on
Darwin.
(gdb) p /x $rsp
$128 = 0x7fffee0a0ce0
(gdb) p &altstack
$129 = (stack_t *) 0x7fffee0a0d00
(gdb) p /x *(int *)0x7fffee0a0cc0  <== canary address
$130 = 0x41b58ab3
(gdb) p 0x7fffee0a0ce0-0x7fffee0a0cc0
$132 = 32

Here we can see that before a
call to GetAltStackSize(), rsp
is 32 bytes above the lowest
canary value. After the call,
there is no more canary because
32 bytes are quickly overwritten
by a call to getconf().

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error

2022-01-18 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

--- Comment #3 from Stas Sergeev  ---
Why does it check for a redzone in a non-leaf function? GetAltStackSize()
calls into glibc's getconf and that overwrites the canary.
Maybe it shouldn't use/check the redzone in a non-leaf function?

[Bug sanitizer/101476] AddressSanitizer check failed, points out a (potentially) non-existing stack error

2022-01-18 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101476

Stas Sergeev  changed:

   What|Removed |Added

 CC||stsp at users dot 
sourceforge.net

--- Comment #2 from Stas Sergeev  ---
I have the very same crash with the
multi-threaded app. The test-case from
this ticket doesn't reproduce it for
me either, but my app crashes nevertheless.
So I debugged it a bit myself.
gcc-11.2.1.

The crash happens here:
https://github.com/gcc-mirror/gcc/blob/master/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc#L10168
Here asan checks that sigaltstack()
didn't corrupt anything while writing
the "old setting" to "oss" ptr.
Next, some check later fails here:
https://code.woboq.org/gcc/libsanitizer/asan/asan_thread.cc.html#340
Asan failed to find the canary value kCurrentStackFrameMagic. The search is
done the following way: it walks the shadow stack down and looks for
kAsanStackLeftRedzoneMagic to find the bottom of the redzone. Then, at the
bottom of the redzone, it looks for the canary value. I checked that the
lowest canary value is overwritten by the call to GetAltStackSize(). It
uses the SIGSTKSZ macro:
https://code.woboq.org/llvm/compiler-rt/lib/sanitizer_common/sanitizer_posix_libcdep.cpp.html#170
which expands into a getconf() call, so it eats up quite a lot of stack.

Now I am not entirely sure what conclusion can be derived from that. I
think the culprit is probably here:
https://code.woboq.org/gcc/libsanitizer/asan/asan_interceptors_memintrinsics.h.html#26
They say that they expect 16 bytes of redzone, but it seems to be
completely exhausted, with all canaries overwritten.

Does any of the above make sense? This is the first time I am looking into
the asan code.

[Bug c++/104053] New: const variable not exported with O1

2022-01-16 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104053

Bug ID: 104053
   Summary: const variable not exported with O1
   Product: gcc
   Version: 11.2.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

const.cpp:
---
const int AAA=5;
---

Good run:
---
$ g++ -O0 -c -o const.o const.cpp 
$ nm const.o |c++filt
 r AAA
---

Bad run:
---
$ g++ -O1 -c -o const.o const.cpp 
$ nm const.o |c++filt

---
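For what it's worth, a guess at what is going on (an assumption, not a
confirmed analysis): a namespace-scope const object has internal linkage in
C++, so once its value is folded the symbol can be dropped entirely.
Declaring it extern keeps it at any optimization level:

---
extern const int AAA = 5;   // nm shows: R AAA, with -O0 and -O1 alike
---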

[Bug c/103502] -Wstrict-aliasing=3 doesn't warn on what is documented as UB

2021-11-30 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103502

--- Comment #7 from Stas Sergeev  ---
(In reply to Eric Gallager from comment #6)
> -Wstrict-aliasing is kind of confusing in this regards since it's different
> from how other warnings with numerical levels work. Normally a higher
> numerical value to a warning option means "print more warnings" but for
> -Wstrict-aliasing it means "try harder to reduce the number of warnings".

The number of warnings, or the number of false positives?
This is what is still unclear to me. If it reduces the number of warnings
(including valid ones, by not applying some checks, for example), then
indeed what you propose can be done (or not done - it would be rather
straightforward either way).

But I had the following assumptions:
1. It reduces the number of false positives only.
2. It increases the number of warnings by avoiding false negatives
(i.e. by not "hiding" warnings that lower levels could miss).
3. The warning I've seen at lower levels was a valid one.

I suppose what you propose can be done if 2 is not true.
I still don't know which of the above wasn't true.

[Bug c/103502] -Wstrict-aliasing=3 doesn't warn on what is documented as UB

2021-11-30 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103502

--- Comment #5 from Stas Sergeev  ---
Note that this code example is trivial. If the warning has disappeared as a
false negative, then I am surprised you close this as NOTABUG, as there is
definitely something to fix or improve here. Not detecting such a trivial
case is a bug.

If OTOH gcc actually deduced that this code is safe, then I am more than
happy, but this has to be confirmed explicitly.
You inserted a seemingly redundant "not" in your sentence, so I am not sure
what you actually meant to say.

[Bug c/103502] -Wstrict-aliasing=3 doesn't warn on what is documented as UB

2021-11-30 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103502

--- Comment #4 from Stas Sergeev  ---
(In reply to Andrew Pinski from comment #3)
> Because GCC can optimize that pun+dereference pattern without _not_ breaking

Did you mean to say "without breaking the code"?
I will assume that is the case:

> the code, GCC decided it should not warn with =3.

So there is no breakage then?
Can I trust this no-warning?
Or what did the above "not" mean?

[Bug c/103502] -Wstrict-aliasing=3 doesn't warn on what is documented as UB

2021-11-30 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103502

--- Comment #2 from Stas Sergeev  ---
(In reply to Andrew Pinski from comment #1)
> I think you misunderstood what precise means in this context really.
> "Higher levels correspond to higher accuracy (fewer false positives). "

So was it a false-positive?

[Bug c/103502] New: -Wstrict-aliasing=3 doesn't warn on what is documented as UB

2021-11-30 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103502

Bug ID: 103502
   Summary: -Wstrict-aliasing=3 doesn't warn on what is documented
as UB
   Product: gcc
   Version: 11.2.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
---
Similarly, access by taking the address, casting the resulting pointer and
dereferencing the result has undefined behavior, even if the cast uses a union
type, e.g.:

int f() {
  double d = 3.0;
  return ((union a_union *) &d)->i;
}
---

Here is the test-case:
---
union a_union {
  int i;
  double d;
};

static int f() {
  double d = 3.0;
  return ((union a_union *) &d)->i;
}

int main()
{
return f();
}
---

It only warns at -Wstrict-aliasing=2 or 1, but doesn't at 3. Level 3 is
documented as the most precise option, so obviously it should warn about
what the official gcc manual declares to be UB.

Note: I very much wish such a construct were not UB. Because of the lack of
a warning at -Wstrict-aliasing=3 I was freely using this construct for
type-punning for years... until I now found it declared invalid in the gcc
manual. If it is actually valid, please fix the docs, not gcc! :)
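For contrast, the same manual page allows type-punning when the access goes
through the union object itself; a sketch of that form (not part of the
reported test case):

---
union a_union {
  int i;
  double d;
};

int f_ok() {
  union a_union t;
  t.d = 3.0;
  return t.i;  /* reading a member other than the one last written:
                  allowed by gcc when done through the union type */
}
---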

[Bug middle-end/98896] local label displaced with -O2

2021-01-30 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98896

--- Comment #9 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #7)
> you need to tell the compiler
> the asm can goto to that label.

Of course one would wonder what else could be done with the passed
label. :) Maybe some distance was calculated by subtracting 2 labels, or
the like. Maybe it wasn't a jump.
But why does it help to assume that something passed to a volatile asm
remains unused? Just wondering.
IMHO it at least deserves a warning.

[Bug middle-end/29305] local label-as-value being placed before function prolog

2021-01-30 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=29305

Stas Sergeev  changed:

   What|Removed |Added

 CC||stsp at users dot 
sourceforge.net

--- Comment #11 from Stas Sergeev  ---
Just as a heads-up: the solution to that problem was suggested here:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98896#c7

[Bug middle-end/98896] local label displaced with -O2

2021-01-30 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98896

--- Comment #8 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #7)
> It doesn't mean you can't use "r" (&),

Well, if not for Andrew telling me exactly that you can't, both here and in
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=29305
then indeed, it doesn't.
Because this seems to work:
---
int main(void)
{
    __label__ cont;
    asm volatile goto (
        "push %0\n"
        "ret\n"
        ::"r"(&&cont):"memory":cont);
cont:
    return 0;
}
---

So... is this a correct, documented, supported, etc. way of doing things,
one that won't disappear in the next gcc version?
Then perfectly fine.

Thanks for your help!

[Bug middle-end/98896] local label displaced with -O2

2021-01-30 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98896

--- Comment #6 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #5)
> I think Andrew meant asm goto, which you haven't tried.

You are right.
Thanks for mentioning that.
But it doesn't work either:
---
int main(void)
{
    __label__ cont;
    asm volatile goto (
        "push %l[cont]\n"
        "ret\n"
        ::::cont);
cont:
    return 0;
}
---

$ LC_ALL=C cc -Wall -ggdb3 -O2 -o jmpret2 jmpret2.c -pie -fPIE
/usr/bin/ld: /tmp/cc1UoxnD.o: relocation R_X86_64_32S against `.text.startup'
can not be used when making a PIE object; recompile with -fPIE


And in an asm file we see:
---
#APP
# 4 "jmpret2.c" 1
push .L2#
ret
---


Please compare this to the following:
---
int main(void)
{
    __label__ cont;
    asm volatile (
        "push %0\n"
        "ret\n"
        ::"r"(&&cont));
cont:
    return 0;
}
---

And its asm:
---
.L2:
.loc 1 4 5 view .LVU1
leaq.L2(%rip), %rax #, tmp83
#APP
# 4 "jmpret3.c" 1
push %rax   # tmp83
ret
---


So... it seems only the second case can work, and it indeed does with clang?
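For what it's worth, a sketch of a possible middle ground (an untested
assumption, not something suggested in this bug): keep the asm goto form so
gcc knows about the control transfer, but compute the label address
RIP-relative inside the asm so it links with -pie:

---
int main(void)
{
    __label__ cont;
    asm volatile goto (
        "lea %l[cont](%%rip), %%rax\n"  /* RIP-relative, so PIE-friendly */
        "push %%rax\n"
        "ret\n"
        ::: "rax", "memory" : cont);
cont:
    return 0;
}
---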

[Bug middle-end/98896] local label displaced with -O2

2021-01-29 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98896

--- Comment #4 from Stas Sergeev  ---
I can achieve similar results with this:
---
void cont(void) asm("_cont");
asm volatile (
"push %0\n"
"ret\n"
"_cont:\n"
::"r"(cont));
---

But this doesn't work if the optimizer inlines
the function, as you then get multiple definitions
of "_cont".

[Bug middle-end/98896] local label displaced with -O2

2021-01-29 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98896

--- Comment #3 from Stas Sergeev  ---
I can't use inline-asm gotos because
I can't manipulate such a label in a portable way.
For example:
---
asm volatile (
"push $1f\n"
"ret\n"
"1:\n"
);
---

This won't work with -pie.
But if I do "r"(&&label) then the rip-relative
reference is generated, so pie works.

> See PR 29305 and others too on why this is undefined.

From that PR I can only see that it's undefined because the documentation
didn't define such a use, at best. I am not sure that immediately means
"undefined".
---
You may not use this mechanism to jump to code in a different function.
If you do that, totally unpredictable things will happen. The best way
to avoid this is to store the label address only in automatic variables
and never pass it as an argument. 
---
Not sure if I violated that.
I pass it as an argument to an inline asm only -
does this count as passing as an argument or what?
I suppose they meant "as an argument to a function".
Inline asm is not a function.

Anyway:
- What is the point of misplacing a label that is obviously (to gcc) used?
- Why does clang have no problem with that code?

[Bug c/98896] New: local label displaced with -O2

2021-01-29 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98896

Bug ID: 98896
   Summary: local label displaced with -O2
   Product: gcc
   Version: 11.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

The following code works as expected with
clang, but hangs with gcc with -O2 (works with -O1):

---
int main()
{
__label__ ret;
asm volatile ("jmp *%0\n" :: "r"(&) : "memory");
ret:
return 0;
}
---

As can be seen from the disassembly, the label is placed before the asm block.

[Bug debug/97989] -g3 somehow breaks -E

2020-11-26 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #23 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #22)
> -S -fpreprocessed test.i will not work

It doesn't seem to support -fpreprocessed though.

Thanks for the explanations, and sorry for naively attributing that effect
to -P.

[Bug debug/97989] -g3 somehow breaks -E

2020-11-26 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #21 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #19)
> It is just that clang doesn't support -g3 at all, as can be seen by clang
> not producing any .debug_macinfo nor .debug_macro sections.

So with -fdebug-macro it actually produces
.debug_macinfo, but still no .debug_macro.
Yet gdb is quite happy with that and can see
the macros.

So...
Why "clang -g3 -fdebug-macro -E -Wp,-P - 

[Bug debug/97989] -g3 somehow breaks -E

2020-11-26 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #20 from Stas Sergeev  ---
Ah, makes sense, thank you.
I was always wondering why under clang I need to pass "-fdebug-macro" for
that (which causes problems for gcc, where it is an unknown option).

But "clang -g3 -fdebug-macro -E -Wp,-P - 

[Bug debug/97989] -g3 somehow breaks -E

2020-11-26 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #18 from Stas Sergeev  ---
IMHO the only thing that matters is whether or not this is useful in
practice. If there are no practical cases for the current "-g3 -P"
behaviour, then to me the fact that it's documented that way is more or
less irrelevant. :)
Besides, not every extension contradicts the documentation. If you extend
-P that way, it will still suppress the line numbers, exactly as documented
before, so no old use-cases should be broken.

> clang simply decided not to implement the documented
> switches the way they were documented.

But in a way that is most useful in practice. :)
But whatever.
I know such arguments (practical use vs documentation)
were raised 1024+ times here, so not trying to say
something new.

[Bug debug/97989] -g3 somehow breaks -E

2020-11-26 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #16 from Stas Sergeev  ---
What do you think about, in addition to your current patch, also changing
-P to disable debug?
That looks more user-friendly and clang-compatible.

[Bug debug/97989] -g3 somehow breaks -E

2020-11-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #14 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #13)
> Because without the -dD implicitly added for -g3 the -g3 option can't work
> as documented, in particular record the macros in the debug information. 
> Because they would be irrecoverably lost during the preprocessing phase.

I think there is a bit of misunderstanding.
The question is:
I re-checked and found that not only "gcc -g3"
sets -dD for cpp (which I understand), but also
"cpp -g3" does the same. Raw cpp, not gcc.
Should cpp react to -gX directly, rather than to an implicitly set -dD?

But:
$ gcc -Wp,-g3 -E -Wp,-P -xc - 

[Bug debug/97989] -g3 somehow breaks -E

2020-11-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #12 from Stas Sergeev  ---
Will your patch also fix this:
$ cpp -g3 -P -xc -g0 - 

[Bug debug/97989] -g3 somehow breaks -E

2020-11-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #10 from Stas Sergeev  ---
Ah, cool, thanks.
Should this be re-opened?

[Bug debug/97989] -g3 somehow breaks -E

2020-11-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #8 from Stas Sergeev  ---
Thanks, but what will this patch do?
Will it allow the trailing -g0, or what?

For example, if you implement -d0 or the like to undo the effect of a
previously specified -dD, will this still break the release branches?
I suppose not?

[Bug debug/97989] -g3 somehow breaks -E

2020-11-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #6 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #5)
> Then they just make bad assumptions.  You can do:
> cc -E -Wp,-P $CFLAGS -g0
> instead if you are sure CFLAGS don't include the -d[DMNIU] options nor e.g.
> -fdirectives-only.

$ gcc -g3 -E -Wp,-P -xc -g0 - 

[Bug debug/97989] -g3 somehow breaks -E

2020-11-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

--- Comment #4 from Stas Sergeev  ---
Jakub, people use "cc -E -Wp,-P $CFLAGS" as a generic preprocessor. $CFLAGS
is needed to specify the includes, but all the other options never affect
-E. But if CFLAGS contains -g3, you suddenly can't do that!

> -g3 enables -dD

Not letting people use "cc -E -Wp,-P $CFLAGS" as a generic preprocessor is
a very bad idea.
If -g3 enables -dD, then perhaps -P should disable it?
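(One possible way to keep using $CFLAGS while dropping the debug options
that pull in -dD; a sketch only, assuming CFLAGS contains no quoted
arguments, and with file.c standing in for whatever is being preprocessed:

$ cc -E -Wp,-P $(printf '%s\n' $CFLAGS | grep -v '^-g') file.c
)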

[Bug debug/97989] New: -g3 somehow breaks -E

2020-11-25 Thread stsp at users dot sourceforge.net via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97989

Bug ID: 97989
   Summary: -g3 somehow breaks -E
   Product: gcc
   Version: 10.2.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: debug
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

$ gcc -E -Wp,-P -xc - 

[Bug c/201] Switch statement will not accept constant integer variable as case label

2020-08-24 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=201

Stas Sergeev  changed:

   What|Removed |Added

 CC||stsp at users dot 
sourceforge.net

--- Comment #6 from Stas Sergeev  ---
Is there some switch to enable that as an extension?

[Bug c++/84194] fails to pack structs with template members

2020-03-03 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84194

Stas Sergeev  changed:

   What|Removed |Added

Version|7.2.1   |9.2.1

--- Comment #2 from Stas Sergeev  ---
Why is it still unconfirmed?

[Bug c++/93984] New: spurious Wclass-conversion warning

2020-02-29 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93984

Bug ID: 93984
   Summary: spurious Wclass-conversion warning
   Product: gcc
   Version: 9.2.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

#include <iostream>

struct D;
struct B {
virtual operator D() = 0;
};
struct D : B
{
operator D() override { std::cout << "conv" << std::endl; return D(); }
};
int main()
{
D obj;
B& br = obj;
(D)br; // calls D::operator D() through virtual dispatch
return 0;
}

$ LC_ALL=C g++ -Wall -o vconv vconv.cpp 
vconv.cpp:9:5: warning: converting 'D' to the same type will never use a type
conversion operator [-Wclass-conversion]
9 | operator D() override { std::cout << "conv" << std::endl; return
D(); }
  | ^~~~


$ ./vconv 
conv


The example above shows that the warning
is spurious. Converting to the same type
will indeed never use the conversion
operator. But the above case converts
from B to D, so the warning does not apply.
It may be quite difficult to check properly, but in this particular example
the "override" keyword is a clear hint that it is going to be called via
virtual dispatch.

[Bug c++/49171] [C++0x][constexpr] Constant expressions support reinterpret_cast

2020-02-12 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=49171

--- Comment #26 from Stas Sergeev  ---
(In reply to Jonathan Wakely from comment #23)
> What you want (and what everybody I've seen asking for similar things)

But comment #17 shows a different use of reinterpret_cast: offsetof in
templates. What workaround would you suggest for that case?

> A more limited extension that solves the problem is a lot more reasonable.

If it had been done before the feature was removed, and for every possible
use-case, then yes. :)
In any case, what limited extension would you suggest for the offsetof case?
Would something like [1] below be considered such an extension
(it currently doesn't compile):
---
template <auto O>
struct B {
static constexpr int off = O();
};

struct A {
char a;
B<[](){ return offsetof(A, a); }> b;
};
---

Below is the very similar code[2] that actually compiles:
---
template <auto O>
struct B {
static constexpr int off = O();
};

struct A {
char a;
static constexpr int off(void) { return offsetof(A, a); }
B<off> b;
};
---

So given that [2] compiles and works, and [1]
can be used as a limited work-around for the
offsetof case (at least in my case [1] is enough),
I wonder if it can be considered as a possible extension.

[Bug c++/49171] [C++0x][constexpr] Constant expressions support reinterpret_cast

2019-11-13 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=49171

Stas Sergeev  changed:

   What|Removed |Added

 CC||stsp at users dot 
sourceforge.net

--- Comment #17 from Stas Sergeev  ---
The following code now breaks:
---
#include <cstdint>
#include <iostream>

template 
struct offset_of {
constexpr operator size_t() const {
return (std::uintptr_t)&(static_cast(nullptr)->*M);
}
};
template 
struct B {
char aa[10];
static const int off = offset_of();
};

struct A {
char a;
char _mark[0];
B b;
};

int main()
{
A a;
std::cout << "size " << sizeof(A) << " off " << a.b.off << std::endl;
return 0;
}
---

There is no other way of implementing offsetof() than by a reinterpret_cast
in constexpr (unless you try very hard - I was still able to work around
these new gcc checks, but it's tricky).

Would you consider adding this to -fpermissive?
Suddenly removing an extension that people got used to is not the best
treatment of users, AFAICT. Isn't -fpermissive the right place for
extensions that people got used to?

[Bug c++/83732] wrong warning about non-POD field

2019-10-31 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83732

--- Comment #8 from Stas Sergeev  ---
(In reply to Jonathan Wakely from comment #7)
> Using the non-standard packed attribute already makes the code non-portable.

It may be non-standard, but it's still portable as long as all compilers
agree on implementing the particular extension. And the "packed" extension
is AFAIK a very old and very widely used one. Dropping support for it is
far from a good decision. Non-standard things should not automatically be
treated as "non-portable" IMO.

Kenman Tsang:
This bug was initially not about the wrong object
size. It was about the wrong diagnostic that says
"ignoring packed attribute" but actually packs an
object perfectly well. Your example demonstrates
the case where the "packed" attribute is really
ignored (and the diagnostic is in line with that),
so this is a different problem.
For which I opened another ticket:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84194
You may want to join that ticket, leaving this one
just for the diagnostic problem.

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-08 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #36 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #35)
> what you want.  I'm familiar with many of the details through having 
> written multiple such build systems myself.

But even you make wrong assumptions.
Most of your suggestions (which are very reasonable in themselves) do not
work or require additional tweaks; configure reacts inadequately to them.

> The non-sysroot form of configuring cross toolchains is to a large extent 
> considered a *legacy* way to configure such a toolchain and so has 
> received less attention to making it feature-complete in its interactions 
> with other features (e.g. building with installed libc at a different 
> path) because most people prefer to use sysroots.

OK, that's an interesting point.
I didn't know it is considered legacy.
The problem is that I still do not fully understand that route.

Does --with-sysroot disable prefix, or should prefix be set to '/'
explicitly? Or to something else?

If I naively do
--with-build-sysroot=$DESTDIR --with-sysroot=/
why won't this work? I suppose there will be some differences in the
directory layout, like no $target dir, but otherwise it could work?

You suggest --with-sysroot=$prefix/$target but I don't understand why that
is so, as normally I have
(1) $prefix/bin/$target-gcc
and
(2) $prefix/$target/bin/gcc
If I set --with-sysroot=$prefix/$target then I can get (2), but how do I
get (1)?

I think this is an rtfm point: is this sysroot trick documented anywhere?

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-08 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #34 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #33)
> to, you can also make your build system set all the variables such as CC 
> and CXX that are needed for the host).

As well as AS, LD and all the rest?
But that defeats the entire purpose of configure.
I need it to work on my PC, on the launchpad build farm, and who knows
where else. So I would need to write a supplementary configure script just
to fill in these variables.
OTOH the new option could affect it in exactly the way that configure would
treat the newly-built compiler as alien (as if $host!=$build) and yet get
the unprefixed tools. And it would not require me to evaluate the
(im)proper --build with manual uname probes (in fact, I don't do uname
probes; I instead grep it from the stage1 configure log and fix it up with
sed).
Why not fix the mess? It just seems like it's broken literally everywhere
we look... (if the total lack of controlling switches can count as
"broken").

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-08 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #32 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #29)
> A common way of doing that is to make $host and $build textually different 
> (after passing through config.sub) while still logically the same.  E.g. 
> x86_64-pc-linux-gnu versus x86_64-unknown-linux-gnu.

OK, I did the following:

$ ../gnu/gcc-9.2.0/configure --disable-plugin --enable-lto --disable-libssp
--disable-nls --enable-libquadmath-support
--enable-version-specific-runtime-libs --enable-fat
--enable-libstdcxx-filesystem-ts --target=i586-pc-msdosdjgpp
--build=x86_64-unknown-linux-gnu --host=x86_64-pc-linux-gnu
--enable-languages=c,c++ --prefix=/usr
--with-build-time-tools=/home/stas/src/build-gcc/build/tmpinst/usr


After which we can see in the log:

configure:2360: checking build system type
configure:2374: result: x86_64-unknown-linux-gnu
configure:2421: checking host system type
configure:2434: result: x86_64-pc-linux-gnu
configure:2454: checking target system type
configure:2467: result: i586-pc-msdosdjgpp

... which looks correct? But the reaction was absolutely
inadequate. It started to look for fully-prefixed host tools:

configure:8235: checking for x86_64-pc-linux-gnu-ar
configure:8265: result: no
configure:8376: checking for x86_64-pc-linux-gnu-as
configure:8406: result: no
configure:8517: checking for x86_64-pc-linux-gnu-dlltool
configure:8547: result: no
configure:8658: checking for x86_64-pc-linux-gnu-ld
configure:8688: result: no

And of course it found nothing.
If I do not explicitly specify --host, then it takes $host from $build, so
no matter how I alter --build, $host and $build would match.
So they either match, or it looks for fully-prefixed host tools...

Any solution to this one? :)

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-07 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #31 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #29) 
> A common way of doing that is to make $host and $build textually different 
> (after passing through config.sub) while still logically the same.  E.g. 
> x86_64-pc-linux-gnu versus x86_64-unknown-linux-gnu.

And being a trick, it appears non-trivial.
I would want the CPU part to be the same.
I.e.
x86_64-pc-linux-gnu --> x86_64-unknown-linux-gnu
i686-pc-linux-gnu --> i686-unknown-linux-gnu

The problem here is that I can't hard-code $host
to any particular value. It must be derived from
$build somehow. Do you have a trick for that too?
Probing manually with uname before configure?
Or maybe it's time to add a new option after all? :)
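The only half-automatic way I can think of is something like
this (just a sketch; the config.guess location and the sed
substitutions are my guesses, not tested):
---
# derive a textually different but logically identical $host from $build
build_triplet=$(../gnu/gcc-9.2.0/config.guess)   # e.g. x86_64-pc-linux-gnu
case "$build_triplet" in
  *-pc-*) host_triplet=$(echo "$build_triplet" | sed 's/-pc-/-unknown-/') ;;
  *)      host_triplet=$(echo "$build_triplet" | sed 's/-unknown-/-pc-/') ;;
esac
../gnu/gcc-9.2.0/configure --build="$build_triplet" --host="$host_triplet" ...
---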

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-07 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #30 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #29)
> A common way of doing that is to make $host and $build textually different 
> (after passing through config.sub) while still logically the same.  E.g. 
> x86_64-pc-linux-gnu versus x86_64-unknown-linux-gnu.

Thanks, I'll explore that and will post back in a day or 2.
But... this one and your sysroot suggestion are _tricks_.
I wonder why one should need tricks for the basic task of
building a cross-compiler; this just looks strange.

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-07 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #28 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #22)
> The build system design is that where A and B are both built at the same 
> time, and the build of B uses A, it should use the *newly built* copy of A 
> as long as that is for _$host = $build_.

Indeed! I missed this "$host = $build" part initially.
When I build djgpp that _runs_ under DOS, it is not
used during the compilation, as that would simply be impossible.
So such a config is already supported.
Is there any way to convince the build system that the
resulting compiler is alien and cannot be used? I think
the $host = $build check is just insufficient; there may be
more cases where the resulting compiler can't be used on
the build system, such as a different prefix.

If there is no such option currently, what do you think
about replacing the
if test "${build}" != "${host}"
with
if test "${build}" != "${host}" -o "$non_native_build"
or something like this?

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-03 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #27 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #26)
> On Thu, 3 Oct 2019, stsp at users dot sourceforge.net wrote:
> 
> > Ah, I am starting to understand.
> > So basically you mean something like this:
> > --with-sysroot=$prefix/$target --with-build-sysroot=$DESTDIR$prefix/$target
> > --with-prefix=/
> 
> It's --with-native-system-header-dir=/include not --with-prefix=/, but 
> that's the idea.

Ah, yes, indeed.
So can I assume that in the -isystem and -B paths,
the sysroot part will be swapped with the build sysroot
during the build? If so, then I would finally understand
your sysroot suggestion.
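As a sketch, I suppose that amounts to a configuration like this
(the option values here are made-up illustrations, not the ones
from my build):
---
# sysroot-style layout: target headers live in <sysroot>/include, and
# during the build the sysroot is redirected into the staging area
../gnu/gcc-9.2.0/configure --target=i586-pc-msdosdjgpp --prefix=/usr \
    --with-sysroot=/usr/i586-pc-msdosdjgpp \
    --with-build-sysroot="$DESTDIR"/usr/i586-pc-msdosdjgpp \
    --with-native-system-header-dir=/include
---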

But there are still 2 more suggestions, yours and mine,
and I am not sure what to do with them. I think your
suggestion is to add something like --with-build-time-prefix,
to be able to specify the alternative location of headers
and libs for a non-sysroot build.
Mine is to add --with-build-time-compiler=dir.
I think we can have both, and I can try to implement mine
(--with-build-time-compiler) if you think it is acceptable
(i.e. the _possibility_ of adding an extra build stage is acceptable).

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-03 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #25 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #24)
> > But isn't there always a possibility to add
> > one more stage? Say, in the example above where
> > at stage1 we only have a static-only compiler,
> > we could add stage2 and stage3. stage2 is a fully-featured
> > compiler to build stage3. I think this approach
> > will always work, just use N+1 stages.
>
> It's desirable to reduce the number of stages, not increase it.

I think it depends. :) So if someone wants to
increase the number of stages, why not support
that as well, together with the approaches you propose.
To me, it's all about flexibility.

> (Bootstrapping such a GNU/Linux toolchain used to take three stages, which 
> was successfully reduced to two by fixing glibc to build with the 
> first-stage GCC.)

A reduced number of stages is also good to support. :)
Why not implement both ways?

> The --with-build-sysroot option gives the location of a directory that 
> uses the sysroot layout, which is not the same as that of a non-sysroot 
> $exec_prefix/$target - unless your non-sysroot layout does not use /usr.
> 
> If you set up the toolchain so that it thinks /include rather than 
> /usr/include is the native system header directory, then you can use 
> --with-sysroot and --with-build-sysroot without any temporary usr -> . 
> symlinks.

Ah, I am starting to understand.
So basically you mean something like this:
--with-sysroot=$prefix/$target --with-build-sysroot=$DESTDIR$prefix/$target
--with-prefix=/

This way we simulate the cross layout and also change
the build paths of headers and libs at once.
And if we use the default prefix, /usr, then we'd also
need to symlink everything back to $sysroot/ because,
while buildable even without the symlinks, we'd get the
wrong target layout, right? In which case we can add
the symlinks after the build is completed, correct?

> Those -isystem paths are the *non-sysroot* kind of paths for headers
> for a cross compiler.

So do you mean that on a sysroot build those -isystem
paths will look much different? How exactly? Or will they
look similar but alterable with --with-build-sysroot? Knowing
how -isystem behaves in the sysroot case is the last piece of
the puzzle for me concerning your suggested solution.

So your suggestion is to have some option like --with-build-time-prefix,
which can be set to $DESTDIR$prefix, right? In this case
the compiler will replace $exec_prefix/$target with
$build_time_prefix/$target during the build, right?

But the question is still: why not support both
ways, if adding an extra stage is also a legitimate and
simple way of getting the work done.

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-03 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #23 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #22)
> On Thu, 3 Oct 2019, stsp at users dot sourceforge.net wrote:
> And overriding like that is fundamentally unsafe, because in general in a 
> multi-stage build (such as for a cross to GNU/Linux where the first stage 
> is a static-only C-only compiler) the libraries have to be built with the 
> more-fully-featured compiler built in the same stage - not with the 
> previous stage's compiler.

But isn't there always a possibility to add
one more stage? Say, in the example above, where
at stage1 we only have a static-only compiler,
we could add stage2 and stage3: stage2 is a fully-featured
compiler used to build stage3. I think this approach
will always work; just use N+1 stages.

> Then maybe an option is needed to find both headers and libraries in the 
> non-sysroot case (where the option for libraries gives the top-level 
> directory under which subdirectories for each multilib, using the multilib 
> OS suffix, can be found).  An option to find the build-time equivalent of 
> $exec_prefix/$target, with lib and include subdirectories, say.

Then why isn't such an option called --with-build-sysroot?
In comment #11 you say
"there is no non-sysroot-headers equivalent of --with-build-sysroot",
but I don't understand what you mean. Can we use --with-build-sysroot
without --with-sysroot, making it exactly the option you describe
above?

> The build system design is that where A and B are both built at the same 
> time, and the build of B uses A, it should use the *newly built* copy of A

Let's do A --> B' --> B then. :)

> "make all-target-libstdc++-v3" or whatever).  In general, if you don't 
> want to build with the newly built copy of A you should configure and 
> build in such a way that there isn't a newly built copy of A at all.

Mm, yes, I was thinking about renaming the dirs
during the build to hide stuff from configure, but
decided against it as too hackish.
Would you suggest this approach for wide adoption?

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-03 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #21 from Stas Sergeev  ---
Hi Joseph, thanks for your assistance!

(In reply to jos...@codesourcery.com from comment #20)
> The only case where the newly built GCC should be overridden is the 
> Canadian cross case,

Since today, this is no longer true. :)
https://code.launchpad.net/~stsp-0/+recipe/djgpp-daily
I managed to get the build working, and this build
only works when it is possible to override the
in-tree tools.
It works as follows (see the sketch below):
- The stage1 cross-compiler is built with --prefix=${DESTDIR}${stash_area}
and installed with DESTDIR unset (it is already in the prefix).
This is a non-sysroot build, so it can work on the host.
- The stage2 compiler is built with --prefix=/usr and
installed with DESTDIR set to the build dir. As a
result, this stage2 compiler can't build its libs (libgcc, libstdc++)!
It can't build its libs because it is never installed
into its prefix dir on the host, so I override its
in-tree tools with the ones from the stage1 compiler.
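Roughly, the scheme looks like this (a simplified sketch of
what the script does; the exact paths are placeholders):
---
# stage1: non-sysroot cross compiler, usable on the host
../gnu/gcc-9.2.0/configure --target=i586-pc-msdosdjgpp \
    --prefix="$DESTDIR$stash_area" ...
make && make install      # DESTDIR left unset; prefix already points there

# stage2: the real package, staged into DESTDIR; its target libs are
# built with the stage1 tools, not the never-installed stage2 compiler
../gnu/gcc-9.2.0/configure --target=i586-pc-msdosdjgpp --prefix=/usr \
    --with-build-time-tools="$DESTDIR$stash_area/i586-pc-msdosdjgpp/bin" ...
make && make install DESTDIR="$DESTDIR"
---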

> Your problem as originally described was with finding non-sysroot headers.

Yes, I attempted the 1-stage build back then.
But why not support the 2-stage build, as this is
what I already have? It only required the tiny patch
above, and since it can't be applied as-is, I can look
into making it a separate option. What I want to
point out is that there is already a use for such an option,
because it is already used in my build (in the form of
the --with-build-time-tools hack for now, but it can be extended).

> A plausible approach to fixing that if you can't use sysroots is to add a 
> a new configure option whose purpose is to point to the build-time 
> non-sysroot location of headers that should be used in building target 
> libraries.

I think I tried this already, see comment #10.
I did a hack to change the header paths. That
worked, but then there are those -B options
that prevented the libs from being found during
the configure process. So I found the change-headers-path
approach infeasible and implemented a 2-stage solution.
Do you still think that the path-altering games
can lead to a solution?
And since I already succeeded in overriding the
in-tree tools, why not implement that route as
a new configure option? It looks very simple.

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-02 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #19 from Stas Sergeev  ---
OK, but the setup when you want to override
the newly-built gcc, is also needed. Like, when
you want to build the "destdir" gcc with the one
installed directly into prefix (and therefore
working fine on host).
Would you suggest a new option for that?
I've seen the variables CC_FOR_TARGET and alike,
but they do not work that way too. So I think
currently there is no way of doing that, and
so some patch should be created. How should it
look like?
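For reference, the kind of override I mean is something like
this (a hypothetical invocation; the paths are placeholders):
---
# point the target-library builds at an already installed cross gcc
make all-target-libgcc \
    CC_FOR_TARGET=/path/to/installed/bin/i586-pc-msdosdjgpp-gcc \
    CXX_FOR_TARGET=/path/to/installed/bin/i586-pc-msdosdjgpp-g++
---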

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-02 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #17 from Stas Sergeev  ---
Created attachment 46991
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=46991&action=edit
the fix

Attached is the patch that I think is correct.
It also seems to work properly, i.e. the full
build process passes (previous patches were only
tested for the part of the build process that
was failing).

This patch allows --with-build-time-tools=
to override the in-tree compiler, which I
think this option is for.
Please let me know if it is good or bad.

[Bug other/91879] --with-build-time-tools doesn't work as expected

2019-10-02 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

Stas Sergeev  changed:

   What|Removed |Added

 Status|RESOLVED|NEW
 Resolution|INVALID |---
Summary|--with-build-sysroot|--with-build-time-tools
   |doesn't work as expected|doesn't work as expected

[Bug other/91879] --with-build-sysroot doesn't work as expected

2019-10-02 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #16 from Stas Sergeev  ---
(In reply to Jonathan Wakely from comment #15)
> For the record, this has moved to
> https://gcc.gnu.org/ml/gcc-help/2019-10/msg2.html

Thanks, I also would like to apologize to Joseph for
not following his suggestion and instead continuing to
fight with the gcc build system. I can't help feeling
there is a bug here, and I think I am quite close to
getting it solved, so I'd be upset having to change the
approach after so much investigation, at least for now.

I've found this patch of Paolo Bonzini:
https://gcc.gnu.org/ml/gcc-patches/2006-01/msg00823.html
---
(GCC_TARGET_TOOL): Do not use a host tool if we found a target tool
with a complete path in either $with_build_time_tools or $exec_prefix.
---

That made me think that --with-build-time-tools=
should override the in-tree tools, but that is
not what actually happens. More details here:
https://gcc.gnu.org/ml/gcc-help/2019-10/msg2.html
Basically, the idea was to build one djgpp on the host,
and then use that one for building the target libs
in another build tree. But it doesn't pick up the
host-installed tools.
After looking at Paolo's patch, I tried the following
change:

--- config/acx.m4.old  2019-10-02 02:39:31.976773572 +0300
+++ config/acx.m4  2019-10-02 02:08:57.223563920 +0300
@@ -522,7 +522,7 @@
   fi
 else
   ifelse([$4],,,
-  [ok=yes
+  [ok=no
   case " ${configdirs} " in
 *" patsubst([$4], [/.*], []) "*) ;;
 *) ok=no ;;


And indeed the build completed, as --with-build-time-tools=
now found all the host tools in preference to the in-tree tools.
So currently I suspect Paolo's patch is at fault.

[Bug other/91879] --with-build-sysroot doesn't work as expected

2019-09-25 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

Stas Sergeev  changed:

   What|Removed |Added

 Status|WAITING |RESOLVED
 Resolution|--- |INVALID

--- Comment #14 from Stas Sergeev  ---
OK, thanks, lets close this.
If I won't succeed, I'll use ML.

[Bug other/91879] --with-build-sysroot doesn't work as expected

2019-09-25 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #12 from Stas Sergeev  ---
(In reply to jos...@codesourcery.com from comment #11)
> Those -isystem paths are the *non-sysroot* kind of paths for headers for a 
> cross compiler.

Unfortunately I wasn't able to fully understand the
idea you explained. You mention "sysroot" and "non-sysroot"
builds, and you suggest using --with-build-sysroot=
together with --with-sysroot= (without providing an example
of how to set --with-build-sysroot=, so I assume it should
be equal to $DESTDIR). May I assume that in the "sysroot"
build all the -isystem paths will be changed so that the
sysroot prefix is replaced with the build-sysroot prefix
during the libs build?


OTOH I found a very simple way of implementing Andrew Pinski's
suggestion. Here it is:
---
--- configure.ac.old    2019-09-25 17:00:34.973324924 +0300
+++ configure.ac        2019-09-25 17:45:33.958152993 +0300
@@ -2327,6 +2327,7 @@
  [use sysroot as the system root during the build])],
   [if test x"$withval" != x ; then
  SYSROOT_CFLAGS_FOR_TARGET="--sysroot=$withval"
+ build_sysroot_path=${withval}
fi],
   [SYSROOT_CFLAGS_FOR_TARGET=])
 AC_SUBST(SYSROOT_CFLAGS_FOR_TARGET)
@@ -2573,7 +2574,7 @@
 # Some systems (e.g., one of the i386-aix systems the gas testers are
 # using) don't handle "\$" correctly, so don't use it here.
 tooldir='${exec_prefix}'/${target_noncanonical}
-build_tooldir=${tooldir}
+build_tooldir=${build_sysroot_path}${tooldir}

 # Create a .gdbinit file which runs the one in srcdir
 # and tells GDB to look there for source files.
--

What do you think guys? Does it break something?
It definitely works for me.

[Bug other/91879] --with-build-sysroot doesn't work as expected

2019-09-24 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #10 from Stas Sergeev  ---
(In reply to Jonathan Wakely from comment #9)
> It's possible the paths passed to -isystem should be prefixed with = when a
> sysroot is in use,

Great idea! Maybe it can even be unconditional,
as without --sysroot it won't change anything?
I hacked that solution in. Also I've found that
I was setting --with-build-sysroot wrongly in the
previous example: I included the prefix in it, but
it shouldn't be there.

Now with your = idea I've got only a tiny bit further.
It fails on the same conftest as before.
It now finds the headers! But it fails to find the libs...
---
configure:4135: /home/stas/src/build-gcc/build/djcross-gcc-9.2.0/djcross/./gcc/xgcc
  -B/home/stas/src/build-gcc/build/djcross-gcc-9.2.0/djcross/./gcc/
  -B/usr/local/cross/i586-pc-msdosdjgpp/bin/
  -B/usr/local/cross/i586-pc-msdosdjgpp/lib/
  -isystem=/usr/local/cross/i586-pc-msdosdjgpp/include
  -isystem=/usr/local/cross/i586-pc-msdosdjgpp/sys-include
  --sysroot=/home/stas/src/build-gcc/ttt -o conftest -g -O2 conftest.c >&5
/home/stas/src/build-gcc/ttt/usr/local/cross/bin/i586-pc-msdosdjgpp-ld: cannot find crt0.o: No such file or directory
/home/stas/src/build-gcc/ttt/usr/local/cross/bin/i586-pc-msdosdjgpp-ld: cannot find -lc
collect2: error: ld returned 1 exit status
---

If I correct the -B options above, then it links!
Any further ideas? :) We are really getting close
to the solution.
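(By "correct" I mean pointing them into the staging area,
presumably something like this, inferred from the log above:)
---
-B/home/stas/src/build-gcc/ttt/usr/local/cross/i586-pc-msdosdjgpp/bin/
-B/home/stas/src/build-gcc/ttt/usr/local/cross/i586-pc-msdosdjgpp/lib/
---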


> but prefixing them with $DESTDIR is definitely wrong.

I renamed the ticket. Sorry for the misleading conclusions.

[Bug other/91879] DESTDIR support seems incomplete

2019-09-24 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #8 from Stas Sergeev  ---
(In reply to Andrew Pinski from comment #7)
> Have you looked into --with-build-sysroot ?

Thanks! Very helpful.
But now it has the same problem when configuring libstdc++:
---
configure:4574: /home/stas/src/build-gcc/build/djcross-gcc-9.2.0/djcross/./gcc/xgcc
  -B/home/stas/src/build-gcc/build/djcross-gcc-9.2.0/djcross/./gcc/
  -B/usr/local/cross/i586-pc-msdosdjgpp/bin/
  -B/usr/local/cross/i586-pc-msdosdjgpp/lib/
  -isystem /usr/local/cross/i586-pc-msdosdjgpp/include
  -isystem /usr/local/cross/i586-pc-msdosdjgpp/sys-include
  --sysroot=/home/stas/src/build-gcc/ttt/usr/local/cross -c -g -O2 conftest.c >&5
conftest.c:10:10: fatal error: stdio.h: No such file or directory
   10 | #include <stdio.h>
      |          ^~~~~~~~
---

As you can see, it added the correct --sysroot.
But unfortunately -isystem is still unaffected.
If I change the -isystem paths in the command line above,
then conftest.c can be compiled.


Here's the full top-level configure invocation:
---
../gnu/gcc-9.2.0/configure --disable-plugin --enable-lto --disable-libssp \
    --disable-nls --enable-libquadmath-support \
    --enable-version-specific-runtime-libs --enable-fat \
    --enable-libstdcxx-filesystem-ts \
    --with-build-sysroot=/home/stas/src/build-gcc/ttt/usr/local/cross \
    --target=i586-pc-msdosdjgpp --prefix=/usr/local/cross \
    --enable-languages=c,c++
---

As you can see, I added --with-build-sysroot.

[Bug other/91879] DESTDIR support seems incomplete

2019-09-24 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #6 from Stas Sergeev  ---
(In reply to Jonathan Wakely from comment #5)
> Which makes sense, since the system headers are not part of GCC itself, so
> why would it expect them in the temporary staging area for GCC's own files?

OK, I understand.
But then it's a bit unclear to me how to build
the cross-compiler. I build both gcc and its libs
on the host, so somehow I need to provide it with
the location of the headers _on the host_, which is
different from what it will be later on the target.

> You didn't show how. What exact commands did you run?
If only that were so easy to tell with the script I have. :)

> I'm not interested in the libtool command that GCC runs, I want to know what
> your script does. Not what it runs basically, but precisely. The configure
> command, and the make commands.
OK, still "basically", but hopefully more informative than before:
---
export DESTDIR=/path/to/somedir
cd builddir

../gnu/gcc-9.2.0/configure --disable-plugin --enable-lto --disable-libssp \
    --disable-nls --enable-libquadmath-support \
    --enable-version-specific-runtime-libs --enable-fat \
    --enable-libstdcxx-filesystem-ts --target=i586-pc-msdosdjgpp \
    --prefix=/usr/local/cross --enable-languages=c,c++

make all-gcc
make install-gcc
make [ fails here ]
make install-strip
---

Some things are still omitted, as the script also
sets paths after make install-gcc and does some other
things. If this is still not enough, I'll work on a
minimal reproducer.

> So if you need it defined before that step, you're doing something wrong. We
> can't tell what you're doing wrong, because you haven't said what you're
> doing.

I am trying to get this build script to work:
https://github.com/stsp/build-gcc/tree/master
to build the djgpp toolchain without root privileges.
The current strategy of that script is to install the
built gcc on the host (which needs root), then build the gcc libs
from there. I want to confine it to the DESTDIR,
including the headers installation. This is how I
run that script (I don't expect you to try
it yourself, of course):
---
QUIET=1 DESTDIR=`pwd`/ttt ./build-djgpp.sh gcc-9.2.0
---

[Bug other/91879] DESTDIR support seems incomplete

2019-09-24 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #4 from Stas Sergeev  ---
(In reply to Harald van Dijk from comment #1)
> The ways to handle libc being installed in non-standard locations depend on
> your specific use case. GCC provides the --with-sysroot and
> --with-native-system-header-dir configure options,

These specify the locations permanently.
My problem is that I need a different sysroot/system-header-dir
only while building the gcc libs.
This is when DESTDIR is set. When the package
is installed on the target, DESTDIR is
unset and the prefix locations should be used.
So I think the options you pointed to do not help
in my case. Somehow I need the build system to
pick up DESTDIR while building its libs.

If I could pass a variable name to --with-native-system-header-dir,
like --with-native-system-header-dir=\$DESTDIR/somedir
(the dollar is "escaped" in that example so it does not expand immediately),
then that would work, but I don't suppose passing
a variable name is possible?

[Bug other/91879] DESTDIR support seems incomplete

2019-09-24 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

--- Comment #3 from Stas Sergeev  ---
(In reply to Harald van Dijk from comment #1)
> archive from the DESTDIR directory and extracting it elsewhere. It is not
> supposed to be used at configure time to pick up other software, only at
> install time to determine the location to install into.

Yes, I understand that.
And yet what I see is that when gcc is building its libs,
it looks for the system headers under the prefix, not under DESTDIR.
But you are right, I probably should try the mailing list
for help with that.

(In reply to Jonathan Wakely from comment #2)
> (In reply to Stas Sergeev from comment #0)
> > Hello.
> > 
> > I tried to build gcc with non-empty DESTDIR.
> 
> What exact commands did you run?

I have a script that runs basically "make all-gcc" and
"make" after setting some env vars. And what fails is the
plain "make" step. I can show you exactly how:
---
libtool: compile: 
/home/stas/src/build-gcc/build/djcross-gcc-9.2.0/djcross/./gcc/xgcc
-B/home/stas/src/build-gcc/build/djcross-gcc-9.2.0/djcross/./gcc/
-B/usr/local/cross/i586-pc-msdosdjgpp/bin/
-B/usr/local/cross/i586-pc-msdosdjgpp/lib/ -isystem
/usr/local/cross/i586-pc-msdosdjgpp/include -isystem
/usr/local/cross/i586-pc-msdosdjgpp/sys-include -DHAVE_CONFIG_H -I.
-I../../../gnu/gcc-9.2.0/libquadmath -I
../../../gnu/gcc-9.2.0/libquadmath/../include -g -O2 -MT math/x2y2m1q.lo -MD
-MP -MF math/.deps/x2y2m1q.Tpo -c
../../../gnu/gcc-9.2.0/libquadmath/math/x2y2m1q.c -o math/x2y2m1q.o
In file included from ../../../gnu/gcc-9.2.0/libquadmath/math/x2y2m1q.c:19:
../../../gnu/gcc-9.2.0/libquadmath/quadmath-imp.h:24:10: fatal error: errno.h:
No such file or directory
   24 | #include <errno.h>
      |          ^~~~~~~~
compilation terminated.
---

> I don't see why DESTDIR should matter until the 'make install' step.

Please note the
-isystem /usr/local/cross/i586-pc-msdosdjgpp/include -isystem
/usr/local/cross/i586-pc-msdosdjgpp/sys-include

above. Clearly it misses the DESTDIR, and feeding the
DESTDIR in there makes the problem go away.
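That is, presumably, the -isystem options would need to look
roughly like this ($DESTDIR here stands for the staging
directory, wherever it happens to be):
---
-isystem $DESTDIR/usr/local/cross/i586-pc-msdosdjgpp/include
-isystem $DESTDIR/usr/local/cross/i586-pc-msdosdjgpp/sys-include
---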

[Bug other/91879] New: DESTDIR support seems incomplete

2019-09-24 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91879

Bug ID: 91879
   Summary: DESTDIR support seems incomplete
   Product: gcc
   Version: 9.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: other
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

Hello.

I tried to build gcc with non-empty DESTDIR.
It fails on libquadmath:

In file included from ../../../gnu/gcc-9.2.0/libquadmath/math/x2y2m1q.c:19:
../../../gnu/gcc-9.2.0/libquadmath/quadmath-imp.h:24:10: fatal error: errno.h:
No such file or directory
   24 | #include <errno.h>
      |          ^~~~~~~~

The problem is that the system headers are searched for in a
prefix path that doesn't account for DESTDIR. This is because
of the explicit -isystem in the command line. I looked at the
build system and found that -isystem is built from the "tooldir"
variable of the configure script. So I made the following change
to confirm my findings:
--- configure.ac.old    2019-09-24 03:44:28.141779422 +0300
+++ configure.ac        2019-09-24 03:30:59.022308759 +0300
@@ -2572,7 +2572,7 @@

 # Some systems (e.g., one of the i386-aix systems the gas testers are
 # using) don't handle "\$" correctly, so don't use it here.
-tooldir='${exec_prefix}'/${target_noncanonical}
+tooldir='${DESTDIR}${exec_prefix}'/${target_noncanonical}
 build_tooldir=${tooldir}

 # Create a .gdbinit file which runs the one in srcdir


And with that change the build worked.
So I suppose this is a build system bug.

[Bug c++/89331] [8/9 Regression] internal compiler error: in build_simple_base_path, at cp/class.c:589

2019-04-03 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89331

--- Comment #7 from Stas Sergeev  ---
(In reply to Jason Merrill from comment #4)
> But when we're in the middle of the class definition we don't know yet
> whether it's standard-layout, so we can't answer yet.  A compiler is allowed
> to reorder fields of a non-standard-layout class.

Thanks, that clears some things up for me.
I definitely am not going to turn this ticket
into a forum, but I am still puzzled why the
code below works (on gcc at least, not on clang):
---
#include 
#include 

class L {};
template 
struct offset_of {
constexpr operator size_t() const {
return (std::uintptr_t)&(((T*)nullptr)->*M);
}
};
template 
struct B {
char aa;
static const int off = offset_of();
};

struct A {
char a;
L _mark[0];
B b;
};

int main()
{
A a;
std::cout << "size " << sizeof(A) << " off " << a.b.off << std::endl;
return 0;
}
---

Here I do 2 emulation tricks.
I use the address of the zero-sized mark to emulate
offsetof() in the not yet fully defined class.
And I use a reinterpret_cast in a constexpr to emulate
offsetof(), which doesn't want to work with the template
arguments for some reason.
This works perfectly on gcc (I filed a bug report against clang).
So if the emulation works, why doesn't the original?
Is there any possibility to somehow extend __builtin_offsetof()
to cover either of those 2 cases where I currently have
to emulate it? While I understand the problem you described,
why does the above example avoid it?

[Bug inline-asm/89334] unsupported size for integer register

2019-02-13 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89334

Stas Sergeev  changed:

   What|Removed |Added

 Status|RESOLVED|UNCONFIRMED
 Resolution|INVALID |---

--- Comment #6 from Stas Sergeev  ---
Thanks Andrew!
Please, make gcc better, not worse.
Previous versions generated "%sil", which,
while silly, was at least traceable.
Now things are completely cryptic.

[Bug inline-asm/89334] unsupported size for integer register

2019-02-13 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89334

--- Comment #4 from Stas Sergeev  ---
Would it be possible to at least show the
correct line number where the register allocation
actually failed? gcc points to a rather "random"
line, and it took many hours of engineering
work to find the problematic spot in a large project.
This is really not a good way to handle the problem.

And I don't understand why it is impossible to add
an error or warning when gcc emits an 8-bit reference for "r"
and knows it is not supposed to work.

[Bug inline-asm/89334] unsupported size for integer register

2019-02-13 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89334

--- Comment #2 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #1)
> the same for -m64, but only al/bl/cl/dl for -m32, because there is no
> sil/dil/bpl for -m32.

But why does this matter?
I am perfectly fine with al/bl/cl/dl and never asked
for sil/dil/bpl. What is the rationale? If "r"
is simply invalid for 8-bit values, shouldn't
the error be different and not depend on the
optimization level? Could you please explain a bit more what
exactly the error is and why it works with -O1?
Why do the unavailable registers (sil/dil/bpl) matter
at all?

[Bug inline-asm/89334] New: unsupported size for integer register

2019-02-13 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89334

Bug ID: 89334
   Summary: unsupported size for integer register
   Product: gcc
   Version: 8.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: inline-asm
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

Created attachment 45700
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=45700&action=edit
test case

The following seemingly valid test-case can
be compiled with clang, but fails with gcc with -O2:

$ gcc -Wall -m32 -O2 -S -o foo.s foo.c 
foo.c: In function ‘do_work’:
foo.c:60:1: error: unsupported size for integer register

[Bug c++/89331] [8/9 Regression] internal compiler error: in build_simple_base_path, at cp/class.c:589

2019-02-13 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89331

--- Comment #2 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #1)
> Simplified testcase:
> struct A { char a; };
> struct B : public A { static constexpr int b = __builtin_offsetof (B, a); };
> 
> clang rejects this too, not really sure if it is valid or not.

Thanks for taking a look!
A slight off-topic: any idea why even this is rejected:
struct A {
char a;
static constexpr int b = __builtin_offsetof (A, a);
};

and is there any work-around when I want to
pass an offsetof value as a template non-type argument,
which is also rejected:
struct A {
char a;
B<__builtin_offsetof(A, a)> b;
};

Does the standard explicitly forbid that, or is it just gcc?

[Bug c++/89331] New: internal compiler error: in build_simple_base_path, at cp/class.c:589

2019-02-13 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89331

Bug ID: 89331
   Summary: internal compiler error: in build_simple_base_path, at
cp/class.c:589
   Product: gcc
   Version: 8.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

#include <cstddef>

class A {
public:
char a;
};

class B : public A {
public:
static constexpr size_t b = offsetof(B, a);
};


$ c++ -Wall -c overl.cpp 
In file included from /usr/include/c++/8/cstddef:50,
 from overl.cpp:1:
overl.cpp:10:41: internal compiler error: in build_simple_base_path, at
cp/class.c:589
 static constexpr size_t b = offsetof(B, a);
 ^
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://gcc.gnu.org/bugs/> for instructions.

[Bug sanitizer/87884] ubsan causes wrong -Wformat-overflow warning

2018-11-06 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87884

--- Comment #2 from Stas Sergeev  ---
(In reply to Martin Liška from comment #1)
> In general we have issues with warnings when sanitizers are used.
More than that.
You also have compile-time errors now!
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87857
Which is completely unacceptable IMHO.

> Martin: What about notifying users that one should not combine sanitizers
> and warnings? It's becoming a very common issue.
Could you please clarify?
Do you mean that -Wall should not be used
together with -fsanitize?
Also, I wonder how you intend to notify users,
and why not fix the code instead.

[Bug sanitizer/87884] New: ubsan causes wrong -Wformat-overflow warning

2018-11-05 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87884

Bug ID: 87884
   Summary: ubsan causes wrong -Wformat-overflow warning
   Product: gcc
   Version: 8.2.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: sanitizer
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
CC: dodji at gcc dot gnu.org, dvyukov at gcc dot gnu.org,
jakub at gcc dot gnu.org, kcc at gcc dot gnu.org, marxin at 
gcc dot gnu.org
  Target Milestone: ---

Created attachment 44959
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=44959&action=edit
test case

Hello.

Attached is the reduced test-case.
It gives:
---
gcc -c -Wall -fsanitize=undefined -O2 mangle.c -I.
mangle.c: In function 'name_convert':
mangle.c:57:3: warning: null destination pointer [-Wformat-overflow=]
   sprintf(s,"%s","test");
---

Which is wrong.
Plus, I am not sure whether "-Wformat-overflow=" is the
correct switch for this type of warning.

[Bug sanitizer/87857] case label does not reduce to an integer constant

2018-11-02 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87857

--- Comment #5 from Stas Sergeev  ---
(In reply to Jakub Jelinek from comment #4)
> The reason you get an error is that the expression isn't constant, because
> it needs to emit the runtime diagnostics.  Just fix the bug and get away
> with that?  1U<<31 will do.

I of course already "fixed" my code as per the
earlier comments here. So you can close this
if you want. But I am sure gcc is not doing
the right thing here. Just make it a warning,
and, more importantly, a -W warning, independent
of any -f option. Then people will get this warning
with -Wall or whatever, and will not get a
compilation failure with -fsanitize on otherwise
warning-free code.

I am not sure I understand how the run-time
diagnostic makes the expression non-const.

[Bug c/87857] case label does not reduce to an integer constant

2018-11-01 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87857

--- Comment #3 from Stas Sergeev  ---
So a clang bug?
I wonder whether ubsan is supposed to produce
compile-time errors rather than run-time
warnings. Would it be possible to downgrade
this to a compile-time warning, and/or add a
switch to disable it?
IMHO it's absolutely unexpected to get a
compilation failure just because of ubsan.

[Bug c/87857] New: case label does not reduce to an integer constant

2018-11-01 Thread stsp at users dot sourceforge.net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87857

Bug ID: 87857
   Summary: case label does not reduce to an integer constant
   Product: gcc
   Version: 7.3.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: stsp at users dot sourceforge.net
  Target Milestone: ---

Hello.

The following example:
---
#include <stdint.h>

int foo(uint64_t a)
{
switch (a) {
case (1 << 31):
  return 1;
}
return 0;
}

int main(int argc, char *argv[])
{
return foo(argc);
}
---

doesn't compile with -fsanitize=undefined:
---
$ gcc -Wall -fsanitize=undefined lswitch.c 
lswitch.c: In function ‘foo’:
lswitch.c:6:5: error: case label does not reduce to an integer constant
 case (1 << 31):
---

But if you use g++ or clang with the same
switches, then it compiles fine.
