Re: [PATCH 6/4] libbacktrace: Add loaded dlls after initialize

2024-07-29 Thread Eli Zaretskii via Gcc
> From: Ian Lance Taylor 
> Date: Mon, 29 Jul 2024 09:46:46 -0700
> Cc: Eli Zaretskii , gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> 
> On Fri, Mar 15, 2024 at 1:41 PM Björn Schäpers  wrote:
> >
> > > On 10.01.2024 at 13:34, Eli Zaretskii wrote:
> > >> Date: Tue, 9 Jan 2024 21:02:44 +0100
> > >> Cc: i...@google.com, gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> > >> From: Björn Schäpers 
> > >>
> > >> On 07.01.2024 at 18:03, Eli Zaretskii wrote:
> > >>> In that case, you can call either GetModuleHandleExA or
> > >>> GetModuleHandleExW; the difference is minor.
> > >>
> > >> Here is an updated version without relying on TEXT or TCHAR, directly
> > >> calling GetModuleHandleExW.
> > >
> > > Thanks, this LGTM (but I couldn't test it, I just looked at the
> > > source code).
> >
> > Here is an updated version.  It is rebased on the combined approach of
> > getting the loaded DLLs, and has two minor changes to suppress warnings.
> 
> This bug report was filed about this patch:
> 
> https://github.com/ianlancetaylor/libbacktrace/issues/131
> 
> > src\pecoff.c(86): error C2059: syntax error: '('
> > src\pecoff.c(89): error C2059: syntax error: '('
> >
> > It works fine if deleting CALLBACK and NTAPI.
> 
> Any ideas?

Instead of deleting those, move them inside the parentheses:

typedef VOID (CALLBACK *LDR_DLL_NOTIFICATION)(ULONG,
                                              struct dll_notification_data*,
                                              PVOID);
typedef NTSTATUS (NTAPI *LDR_REGISTER_FUNCTION)(ULONG,
                                                LDR_DLL_NOTIFICATION, PVOID,
                                                PVOID*);

and also I think you need to include the header that defines the
NTSTATUS type.
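
For concreteness, a minimal sketch of how this might fit together
(untested; the <winternl.h> include is an assumption -- it is one
user-mode header that provides the NTSTATUS typedef):

#include <windows.h>
#include <winternl.h>  /* assumed source of the NTSTATUS typedef */

/* Declared elsewhere in the patch.  */
struct dll_notification_data;

typedef VOID (CALLBACK *LDR_DLL_NOTIFICATION)(ULONG,
                                              struct dll_notification_data*,
                                              PVOID);
typedef NTSTATUS (NTAPI *LDR_REGISTER_FUNCTION)(ULONG,
                                                LDR_DLL_NOTIFICATION, PVOID,
                                                PVOID*);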

Caveat: I don't have MSVC, so I couldn't verify that these measures
fix the problem, sorry.


Re: [PATCH 6/4] libbacktrace: Add loaded dlls after initialize

2024-01-10 Thread Eli Zaretskii via Gcc
> Date: Tue, 9 Jan 2024 21:02:44 +0100
> Cc: i...@google.com, gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> From: Björn Schäpers 
> 
> On 07.01.2024 at 18:03, Eli Zaretskii wrote:
> > In that case, you can call either GetModuleHandleExA or
> > GetModuleHandleExW; the difference is minor.
> 
> Here is an updated version without relying on TEXT or TCHAR, directly calling
> GetModuleHandleExW.

Thanks, this LGTM (but I couldn't test it, I just looked at the
source code).


Re: [PATCH 6/4] libbacktrace: Add loaded dlls after initialize

2024-01-07 Thread Eli Zaretskii via Gcc
> Date: Sun, 7 Jan 2024 17:07:06 +0100
> Cc: i...@google.com, gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> From: Björn Schäpers 
> 
> > That was about GetModuleHandle, not about GetModuleHandleEx.  For the
> > latter, all Windows versions that support it also support "wide" APIs.
> > So my suggestion is to use GetModuleHandleExW here.  However, you will
> > need to make sure that notification_data->dll_base is declared as
> > 'wchar_t *', not 'char *'.  If dll_base is declared as 'char *', then
> > only GetModuleHandleExA will work, and you will lose the ability to
> > support file names with non-ASCII characters outside of the current
> > system codepage.
> 
> The dll_base is a PVOID.  With the GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS
> flag, GetModuleHandleEx does not look for a name, but uses an address in
> the module to get the HMODULE, so you cast it to char* or wchar_t*
> depending on which function you call.  Actually, one could just cast the
> dll_base to HMODULE; at least in win32 on x86 the HMODULE of a dll is
> always its base address.  But to make it safer and future-proof I went
> through GetModuleHandleEx.

In that case, you can call either GetModuleHandleExA or
GetModuleHandleExW; the difference is minor.
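
For reference, a minimal sketch of the wide variant with the
FROM_ADDRESS flag (the helper name is made up and the code is
untested):

#include <windows.h>

/* Hypothetical helper: map an address inside a loaded DLL to its
   HMODULE without changing the module's reference count.  */
static HMODULE
module_from_address (PVOID addr)
{
  HMODULE mod = NULL;
  if (!GetModuleHandleExW (GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS
                           | GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
                           (LPCWSTR) addr, &mod))
    return NULL;
  return mod;
}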


Re: [PATCH 6/4] libbacktrace: Add loaded dlls after initialize

2024-01-07 Thread Eli Zaretskii via Gcc
[I re-added the other addressees, as I don't think you meant to make
this discussion private between the two of us.]

> Date: Sun, 7 Jan 2024 12:58:29 +0100
> From: Björn Schäpers 
> 
> On 07.01.2024 at 07:50, Eli Zaretskii wrote:
> >> Date: Sat, 6 Jan 2024 23:15:24 +0100
> >> From: Björn Schäpers 
> >> Cc: gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> >>
> >> This patch adds libraries which are loaded after backtrace_initialize, like
> >> plugins or similar.
> >>
> >> I don't know what style is preferred for the Win32 typedefs; should the
> >> code use PVOID or void*?
> > 
> > It doesn't matter, at least not if the source file includes the
> > Windows header files (where PVOID is defined).
> > 
> >> +  if (reason != /*LDR_DLL_NOTIFICATION_REASON_LOADED*/1)
> > 
> > IMO, it would be better to supply a #define if undefined:
> > 
> > #ifndef LDR_DLL_NOTIFICATION_REASON_LOADED
> > # define LDR_DLL_NOTIFICATION_REASON_LOADED 1
> > #endif
> > 
> 
> I surely can define it.  But the ifndef is not needed, since there are no
> headers containing the function signatures, structures or the defines:
> https://learn.microsoft.com/en-us/windows/win32/devnotes/ldrregisterdllnotification

OK, I wasn't sure about that.

> >> +  if (!GetModuleHandleEx (GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS
> >> +| GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
> >> +(TCHAR*) notification_data->dll_base,
> > 
> > Is TCHAR correct here?  Does libbacktrace indeed use TCHAR and rely
> > on a compile-time definition of UNICODE?  (I'm not familiar with the
> > internals of libbacktrace, so apologies if this is a silly question.)
> > 
> > Thanks.
> 
> As far as I can see it's the first time for TCHAR; I would've gone for
> GetModuleHandleExW, but
> https://gcc.gnu.org/pipermail/gcc/2023-January/240534.html

That was about GetModuleHandle, not about GetModuleHandleEx.  For the
latter, all Windows versions that support it also support "wide" APIs.
So my suggestion is to use GetModuleHandleExW here.  However, you will
need to make sure that notification_data->dll_base is declared as
'wchar_t *', not 'char *'.  If dll_base is declared as 'char *', then
only GetModuleHandleExA will work, and you will lose the ability to
support file names with non-ASCII characters outside of the current
system codepage.

> But I didn't want to force GetModuleHandleExA, so I went for TCHAR and 
> GetModuleHandleEx so it automatically chooses which to use. Same for 
> GetModuleHandle of ntdll.dll.

The considerations for GetModuleHandle and for GetModuleHandleEx are
different: the former is also available on old versions of Windows
that don't support "wide" APIs.


Re: [PATCH 6/4] libbacktrace: Add loaded dlls after initialize

2024-01-06 Thread Eli Zaretskii via Gcc
> Date: Sat, 6 Jan 2024 23:15:24 +0100
> From: Björn Schäpers 
> Cc: gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> 
> This patch adds libraries which are loaded after backtrace_initialize, like 
> plugins or similar.
> 
> I don't know what style is preferred for the Win32 typedefs; should the
> code use PVOID or void*?

It doesn't matter, at least not if the source file includes the
Windows header files (where PVOID is defined).

> +  if (reason != /*LDR_DLL_NOTIFICATION_REASON_LOADED*/1)

IMO, it would be better to supply a #define if undefined:

#ifndef LDR_DLL_NOTIFICATION_REASON_LOADED
# define LDR_DLL_NOTIFICATION_REASON_LOADED 1
#endif

> +  if (!GetModuleHandleEx (GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS
> +   | GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
> +   (TCHAR*) notification_data->dll_base,

Is TCHAR correct here?  Does libbacktrace indeed use TCHAR and rely
on a compile-time definition of UNICODE?  (I'm not familiar with the
internals of libbacktrace, so apologies if this is a silly question.)

Thanks.


Re: libgcov, fork, and mingw (and other targets without the full POSIX set)

2023-12-01 Thread Eli Zaretskii via Gcc
> Cc: Jonathan Yong <10wa...@gmail.com>, Jan Hubicka , Nathan
>  Sidwell 
> Date: Fri, 01 Dec 2023 09:02:55 +0100
> From: Florian Weimer via Gcc 
> 
> I've received a report of a mingw build failure:
> 
> ../../../gcc/libgcc/libgcov-interface.c: In function '__gcov_fork':
> ../../../gcc/libgcc/libgcov-interface.c:185:9: error: implicit declaration of 
> function 'fork' [-Wimplicit-function-declaration]
>   185 |   pid = fork ();
>   | ^~~~
> make[2]: *** [Makefile:932: _gcov_fork.o] Error 1
> make[2]: *** Waiting for unfinished jobs
> 
> As far as I understand it, mingw doesn't have fork and doesn't declare
> it in <unistd.h>, so it's not clear to me how this has ever worked.  I
> would expect a linker failure.  Maybe that doesn't happen because the
> object containing a reference to fork is only ever pulled in if the
> application calls the intercepted fork, which doesn't happen on mingw.
> 
> What's the best way to fix this?  I expect it's going to impact other
> targets (perhaps for different functions) because all of
> libgcov-interface.c is built unconditionally.  I don't think we run
> configure for the target, so we can't simply check for a definition of
> the HAVE_FORK macro.

I'm not familiar with this code, so apologies in advance if what I
suggest below makes no sense.

If the code which calls 'fork' is never expected to be called in the
MinGW build, then one way of handling this is to define a version of
'fork' that always fails, conditioned by a suitable #ifdef, so that
its declaration and definition are visible when this file is compiled.
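
Something along these lines, perhaps -- a minimal sketch, assuming
__MINGW32__ is a suitable guard and ENOSYS an appropriate errno value
(untested):

#if defined (__MINGW32__)
#include <errno.h>
#include <sys/types.h>

/* Stub for targets without fork: always fail.  */
static pid_t
fork (void)
{
  errno = ENOSYS;
  return (pid_t) -1;
}
#endif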


Re: [PATCH 4/4] libbacktrace: get debug information for loaded dlls

2023-11-30 Thread Eli Zaretskii via Gcc
> Date: Thu, 30 Nov 2023 11:53:54 -0800
> Cc: gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> From: Ian Lance Taylor via Gcc 
> 
> Also starting with a module count of 1000 seems like a lot.  Do
> typical Windows programs load that many modules?

Unlikely.  I'd start with 100.


Re: [PATCH 3/4] libbacktrace: work with aslr on windows

2023-11-20 Thread Eli Zaretskii via Gcc
> Date: Mon, 20 Nov 2023 20:57:38 +0100
> Cc: gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> From: Björn Schäpers 
> 
> +#ifndef NOMINMAX
> +#define NOMINMAX
> +#endif

Why is this part needed?

Otherwise, LGTM, thanks.  (But I don't have the approval rights, so
please wait for Ian to chime in.)


Re: RFC: Top level configure: Require a minimum version 6.8 texinfo

2023-08-29 Thread Eli Zaretskii via Gcc-patches
> Date: Tue, 29 Aug 2023 17:45:20 +0200
> Cc: gcc-patches@gcc.gnu.org, gdb-patc...@sourceware.org,
>  binut...@sourceware.org
> From: Jakub Jelinek via Gdb-patches 
> 
> On Tue, Aug 29, 2023 at 04:21:44PM +0100, Nick Clifton via Gcc-patches wrote:
> >   Currently the top level configure.ac file sets the minimum required
> >   version of texinfo to be 4.7.  I would like to propose changing this
> >   to 6.8.
> >   
> >   The reason for the change is that the bfd documentation now needs at
> >   least version 6.8 in order to build[1][2].  Given that 4.7 is now
> >   almost 20 years old (it was released in April 2004), updating the
> >   requirement to a newer version does seem reasonable.  On the other
> >   hand 6.8 is quite new (it was released in March 2021), so a lot of
> >   systems out there may not have it.
> > 
> >   Thoughts ?
> 
> I think that is too new.

It _is_ new.  But I also don't understand why Nick thinks he needs
Texinfo 6.8.  AFAIR, makeinfo has supported @node lines without explicit
pointers since at least version 4.8.  I have on my disk the manual
produced for Emacs 22.1, where the Texinfo sources have no pointers,
e.g.:

  @node Abbrev Concepts

and the corresponding Info file says:

  This is ../info/emacs, produced by makeinfo version 4.8 from emacs.texi.

So I'm not sure what exactly is the feature that requires Texinfo 6.8.
What am I missing?


LSP based on GCC

2023-05-17 Thread Eli Zaretskii via Gcc
Dear GCC developers,

Emacs 29, to be released soon, will come with a built-in client for
the LSP protocol.  This makes it possible to enhance important Emacs features,
such as at-point documentation, on-the-fly diagnostic annotations,
finding definitions and uses of program identifiers, enhanced
completion of symbols and code, etc., based on capabilities of LSP
servers.

The Emacs LSP client comes with support for many popular LSP servers
OOTB and for all the programming languages supported by Emacs.
However, all the available servers for C and C++ languages are based
on Clang.  AFAIU, this is because GCC does not yet have its own
implementation of the LSP.  I found this message posted to gcc-patches
in 2017:

  https://gcc.gnu.org/legacy-ml/gcc-patches/2017-07/msg01448.html

which described the initial implementation of LSP in GCC, but I seem
to be unable to find out what happened with that since then.

Are there plans for implementing the LSP in GCC?  If so, which GCC
version is expected to have this included?

If there are no current plans for implementing LSP, I hope someone
will work on that soon, given that Emacs can now use it, and because
having a GCC-based LSP implementation will allow people to use their
installed GCC as the basis for LSP features, instead of having to
install yet another compiler.

TIA


Re: More C type errors by default for GCC 14

2023-05-12 Thread Eli Zaretskii via Gcc
> Date: Fri, 12 May 2023 10:15:45 +0200
> From: Jakub Jelinek 
> Cc: Arsen Arsenović , luang...@yahoo.com,
> jwakely@gmail.com, gcc@gcc.gnu.org
> 
> On Fri, May 12, 2023 at 10:53:28AM +0300, Eli Zaretskii via Gcc wrote:
> > 
> > Let's keep in mind that veterans are much more likely to have to deal
> > with very large programs than newbies, and so dealing with breakage
> > for them is much harder than doing that in a toy program.  Thus, the
> > proposal does "pressure the old very much".
> 
> Pressure for something they should have done decades ago if the code was
> really maintained.

Why not assume that at least some of them didn't because they had good
reasons?

> Anyway, I don't understand why these 3 (implicit fn declarations,
> implicit ints and int-conversions) are so different from anything that
> one needs to change in codebases every year as documented in
> gcc.gnu.org/gcc-NN/porting_to.html .  It is true that for C++ there are
> more such changes than for C, but say GCC 12 no longer accepts
> computed gotos with non-pointer types, GCC 10 changed default from
> -fcommon to -fno-common for C which also affects dusty codebases
> significantly, GCC 9 changed the lifetime of block scope compound literals
> (again, affected various old codebases), GCC 5 broke bad user expectations
> regarding preprocessor behavior by adding extra line markers to represent
> whether certain tokens come from system headers or not, etc.
> And of course compiler optimizations added every year can turn previously
> "working" code with undefined behaviors in it into code not working as user
> expected.  E.g. compared to the above 3 that are easily fixed, it is obvious
> what the problem is, tracking undefined behavior in code even when one
> has sanitizers etc. is much more time consuming.

The difference, IMO, is that in all the cases you describe the changes
were done because they were necessary for GCC to support some new
feature, or fix a bug.  By contrast, in this case there is no new
feature that requires failing to compile the code in question.

> Can we stop this thread.  I'm afraid everything has been said multiple
> times, it is up to the GCC Steering Committee to decide this if there is
> disagreement on it among GCC developers, but my current understanding is
> that that is not the case here and that the active GCC developers agree on
> it.

Fine, I will stop posting at this point.  Thanks for listening.


Re: More C type errors by default for GCC 14

2023-05-12 Thread Eli Zaretskii via Gcc
> From: Jonathan Wakely 
> Date: Fri, 12 May 2023 08:28:00 +0100
> Cc: Eli Schwartz , Po Lu , 
>   "gcc@gcc.gnu.org" 
> 
>  It is on topic because there doesn't seem to be anything in the
>  arguments brought up for this current proposal that couldn't be
>  brought up in favor of removing -fpermissive.  There are no guiding
>  principles being uttered which allow the current proposal, but will
>  disallow the removal of -fpermissive.  
> 
> "Let's change a default and add an option to get the old default" is really 
> not the disaster you're making
> it out to be. You're becoming a laughing stock at this point.

I'm sad to hear that you consider this laughable.  I hope the
development team as a whole and the steering committee will consider
that more seriously.

>  The same "let's be more popular
>  and forthcoming to newbies, and more like Clang" PR-style stuff can
>  justify both.
> 
> It's not about popularity. If that's your takeaway then you're not paying 
> attention, whatever you claim
> about reading everything in the thread. It's about helping people write 
> correct code, first time, without
> some of the avoidable traps that C presents.

GCC already helps those people who want to be helped.  This was
pointed out several times.

> The C ecosystem has a shockingly bad reputation when it comes to security and 
> "just don't write
> bugs" is naive and ineffective. Maybe you're good enough for that to work, 
> but then you should also be
> able to cope with a change in defaults.
> 
> It's time for some defaults to change so that modern C is preferred, and 
> "implicit everything, hope the
> programmer got it right" requires explicit action, *but it's still possible 
> to do* for the 1970s nostalgia
> fans.

That's just a bunch of slogans.  Decisions about backward-incompatible
changes should do better than heed slogans.

>  > We might as well assume that the GCC developers are honest and truthful
>  > people, otherwise it is *definitely* a waste of time asking them about
>  > this change in the first place.
> 
>  This is not about honesty.  No one is questioning the honesty of GCC
>  developers.  What is being questioned are the overriding principles
>  that should be applied when backward-incompatible changes are
>  proposed.  Are there such principles in GCC development, and if there
>  are, where are they documented?  Or are such discussions just some
>  ad-hoc disputes, and the results are determined by which party is at
>  that time more vocal?
> 
> GCC has always taken backwards compatibility seriously. That doesn't mean it 
> is the prime directive
> and can never be violated, but it's absolutely always considered.

Considered and dismissed, it seems, at least judging by your
responses.

As for when it can be violated, I already explained my opinions.
TL;DR: there are valid cases, but not in this case.

> In this case, changing the default
> seems appropriate to many people, including those who actually maintain gcc 
> and deal with the
> consequences of the current defaults.

These decisions should not be based on majority votes.  Breaking even
one person's code is much worse than helping many others realize more
clearly their code needs to be fixed.

> Do you have anything new to add other than repeating the same arguments? 
> We've heard them now,
> thanks.

Oh, please drop the attitude.  You are not making your arguments more
convincing by being hostile and ad-hominem.  We are supposed to have
the same goals.


Re: More C type errors by default for GCC 14

2023-05-12 Thread Eli Zaretskii via Gcc
> From: Arsen Arsenović 
> Cc: luang...@yahoo.com, jwakely@gmail.com, gcc@gcc.gnu.org
> Date: Thu, 11 May 2023 21:25:53 +0200
> 
> >> This seems like a good route to me - it facilitates both veterans
> >> maintaining code and beginners just learning how to write C.
> >
> > No, it prefers beginners (which already have the warnings, unless they
> > deliberately turn them off) to veterans who know what they are doing,
> > and can live with those warnings.
> 
> Indeed.  I said facilitates, not treats equally.  I think the veterans
> here won't lose much by having to pass -fpermissive, and I think that's
> a worthwhile sacrifice to make, to nurture the new without pressuring
> the old very much.

Let's keep in mind that veterans are much more likely to have to deal
with very large programs than newbies, and so dealing with breakage
for them is much harder than doing that in a toy program.  Thus, the
proposal does "pressure the old very much".

> > The right balance is exactly what we have now: emitting warnings
> > without breaking builds.
> 
> I disagree - I think breaking builds here (remember, it takes 13 bytes
> to fix them) is a much lower weight than the other case being shot in
> the foot for an easily detectable and treatable error being made easily
> missable instead, so I reckon the scale is tipped heavily towards the
> veterans.

I described in an earlier message how this breakage looks in real
life, and why it causes a lot of frustration.  The main problem is
discovering that things broke because GCC defaults, and then
discovering how to pacify GCC with the least effort.  You are talking
only about what follows these two discovery processes, and that misses
the main disadvantage of this proposal (and in fact almost any breaking
proposal).

> On that note - lets presume a beginners role.  I've just started using
> GCC.  I run 'gcc -O2 -Wall main.c fun.c' and I get an a.out.  It
> mentions some 'implicit function generation', dunno what that means - if
> it mattered much, it'd have been an error.  I wrote a function called
> test that prints the int it got in hex, but I called it with 12.3, but
> it printed 1.. what the heck?
> 
> Why that happened is obvious to you and I (if you're on the same CPU as
> me), but to a beginner is utter nonsense.
> 
> At this point, I can only assume one goes to revisit that warning..  I'd
> hope so at least.

That's perfectly okay: the beginner made a mistake of ignoring a
warning, and now he or she will need to pay for that mistake.

But why should someone else pay, and pay dearly, for the mistakes of
such newbies?  That is simply unfair.  The payment should be on the
one who made the mistake.

> I doubt the beginner would know to pass
> -Werror=implicit-function-declaration in this case (or even about
> Werror...  I just told them what -Wall and to read the warnings, which
> was gleefully ignored)

They don't need to.  They just need to fix their bad code, that's all.

> Is it that much of a stretch to imagine that a maintainer of a codebase
> that has not seen revisions to get it past K&R practices would
> know that they need to pass -std=c89 (or a variant of such), or even
> -fpermissive - assuming that they could even spare to use GCC 14 as
> opposed to 2.95?

If the program built okay till now, they might not know.

> As an anecdote, just recently I had to fix some code written for i686
> CPUs, presumably for GCC 4.something or less, because the GCC I insist
> on using (which is 13 and has been since 13.0 went into feature-freeze)
> has started using more than the GPRs on that machine (which led to
> hard-to-debug crashes because said codebase does not enable the requisite CPU
> extensions, or handle the requisite registers properly).  I think this
> fits within the definition of 'worked yesterday, broke today'.

But it broke for a valid technical reasons: GCC was improved by
supporting more registers, and thus it now emits better code.  This
kind of reason is perfectly legitimate for breaking some old and
borderline-invalid programs, especially if GCC was emitting warnings
for those programs in past releases.

But the change discussed here is not like that.

> With all that to consider, is it *really* a significant cost to add
> -fpermissive?

See above (and my earlier message): the significant cost is to
discover the root cause of the problem, and that -fpermissive is the
solution.  The rest might be relatively easier, at least in some
projects.

> I expect no change in behavior from those that maintain these old
> codebases, they know what they're doing, and they have bigger fish to
> fry - however, I expect that this change will result in:
> 
> - A better reputation for GCC and the GCC project (by showing that we do
>   care for code correctness),
> - More new code being less error prone (by merit of simple errors being
>   detected more often),
> - Less 'cult knowledge' in the garden path,
> - More responsible beginners, and
> - Fewer people being able to 

Re: More C type errors by default for GCC 14

2023-05-12 Thread Eli Zaretskii via Gcc
> Date: Thu, 11 May 2023 23:07:55 -0400
> Cc: gcc@gcc.gnu.org
> From: Eli Schwartz via Gcc 
> 
> > Being sceptical about the future is perfectly reasonable.
> 
> My opinion on this is (still) that if your argument is that you don't
> want -fpermissive or -std=c89 to be removed, you are more than welcome
> to be skeptical about that (either one or both), but I don't see why
> that is on topic for the question of whether things should be moved to
> flags such as those while they do exist.

It is on topic because there doesn't seem to be anything in the
arguments brought up for this current proposal that couldn't be
brought up in favor of removing -fpermissive.  There are no guiding
principles being uttered which allow the current proposal, but will
disallow the removal of -fpermissive.  The same "let's be more popular
and forthcoming to newbies, and more like Clang" PR-style stuff can
justify both.

> We might as well assume that the GCC developers are honest and truthful
> people, otherwise it is *definitely* a waste of time asking them about
> this change in the first place.

This is not about honesty.  No one is questioning the honesty of GCC
developers.  What is being questioned are the overriding principles
that should be applied when backward-incompatible changes are
proposed.  Are there such principles in GCC development, and if there
are, where are they documented?  Or are such discussions just some
ad-hoc disputes, and the results are determined by which party is at
that time more vocal?


Re: More C type errors by default for GCC 14

2023-05-12 Thread Eli Zaretskii via Gcc
> From: Jason Merrill 
> Date: Thu, 11 May 2023 22:55:07 -0400
> Cc: Eli Schwartz , Eli Zaretskii , 
> gcc@gcc.gnu.org
> 
> > Because now people will have to go through dozens and dozens of
> > Makefiles, configure.in, *.m4
> 
> You shouldn't have to change any of those, just configure with CC="gcc
> -fwhatever".

That doesn't always work.  Whether it works or not depends on how the
Makefile's are written, whether libtool is or isn't being used, etc.

So yes, it will work in some, perhaps many, cases, but not in all of
them.

Moreover, the main problem with such a change is discovering that the
build broke because of different GCC defaults, and that adding
"-fpermissive" is all that's needed to pacify GCC.  I already tried to
explain why this is nowhere as simple in real life as some people here
seem to assume.  The aggravation and frustration caused by that
process of discovery is the main downside of this proposal, and I hope
it will be considered very seriously when making the decision.


Re: More C type errors by default for GCC 14

2023-05-12 Thread Eli Zaretskii via Gcc
> Date: Thu, 11 May 2023 18:43:32 -0400
> Cc: luang...@yahoo.com, gcc@gcc.gnu.org
> From: Eli Schwartz 
> 
> On 5/11/23 2:24 AM, Eli Zaretskii wrote:
> 
> > Back to the subject: the guarantees I would personally like to have is
> > that the current GCC development team sees backward compatibility as
> > an important goal, and will try not to break old programs without very
> > good technical reasons.  At least in Emacs development, that is the
> > consideration that is very high on our priority list when making
> > development decisions.  It would be nice if GCC (and any other GNU
> > project, for that matter) would do the same, because being able to
> > upgrade important tools and packages without fear is something users
> > value very much.  Take it from someone who uses GCC on various
> > platforms since version 1.40.
> 
> This discussion thread is about having very good technical reasons -- as
> explained multiple times, including instances where you agreed that the
> technical reasons were good.

They are not technical, no.  Leaving the current behavior does not
technically hamper GCC and its users in any way -- GCC can still
compile the same programs, including those with modern std= values, as
it did before, and no false warnings or errors are caused when
compiling programs written in valid standard C.

The reasons are basically PR: better reputation for GCC etc.  Maybe
even fashion: Clang does that, so how come we don't?

> Furthermore, even despite those technical reasons, GCC is *still*
> committed to not breaking those old programs anyway. GCC merely wants to
> make those old programs have to be compiled in an "old-programs" mode.
> 
> Can you explain to me how you think this goal conflicts with your goal?

I already did, in previous messages, where I described what we all are
familiar with: the plight of a maintainer of a large software system
whose build suddenly breaks, and the difficulty in understanding which
part of the system's upgrade caused that.  I'd rather not repeat that:
there are already too many repetitions here that make the discussion
harder to follow.


Re: More C type errors by default for GCC 14

2023-05-12 Thread Eli Zaretskii via Gcc
> Date: Thu, 11 May 2023 18:30:20 -0400
> Cc: luang...@yahoo.com, gcc@gcc.gnu.org
> From: Eli Schwartz 
> 
> On 5/11/23 2:12 AM, Eli Zaretskii wrote:
> > 
> > He is telling you that removing support for these old features, you
> > draw users away from GCC and towards proprietary compilers.
> > 
> > One of the arguments in this thread _for_ dropping that support was
> > that by not rejecting those old programs, GCC draws some users away
> > from GCC.  He is telling you that this change will, perhaps, draw some
> > people to GCC, but will draw others away from GCC.  The difference is
> > that the former group will start using Clang, which is still free
> > software (at least some of its versions), whereas the latter group has
> > nowhere to go but to proprietary compilers.  So the FOSS community
> > will have suffered a net loss.  Something to consider, I think.
> 
> But I do not understand the comparison to -traditional. Which was
> already removed, and already resulted in, apparently, at least one group
> being so adamant on not-C that it switched to a proprietary compiler.
> Okay, understood. But at this point that group is no longer users of
> GCC... right?
> 
> So what is the moral of this story?

See above: repeating the story of -traditional could result in a
net loss for the FOSS movement.

> To avoid repeating the story of -traditional, and instead make sure
> that users of -std=c89 always have a flag they can use to indicate
> they are writing old c89 code?

No, the moral is not to introduce breaking behavior without very good
technical reasons.


Re: More C type errors by default for GCC 14

2023-05-11 Thread Eli Zaretskii via Gcc
> Cc: Jonathan Wakely , gcc@gcc.gnu.org
> Date: Thu, 11 May 2023 10:44:47 +0200
> From: Arsen Arsenović via Gcc 
> 
> the current default of accepting this code in C is harmful to those
> who are writing new code, or are learning C.

People who learn C should be advised to turn on all the warnings, and
should be educated not to ignore any warnings.  So this is a red
herring.

> This seems like a good route to me - it facilitates both veterans
> maintaining code and beginners just learning how to write C.

No, it prefers beginners (which already have the warnings, unless they
deliberately turn them off) to veterans who know what they are doing,
and can live with those warnings.  The right balance is exactly what
we have now: emitting warnings without breaking builds.


Re: More C type errors by default for GCC 14

2023-05-11 Thread Eli Zaretskii via Gcc
> From: David Brown 
> Date: Thu, 11 May 2023 08:52:03 +0200
> 
> > But we are not talking about some random code that just happened to
> > slip through cracks as a side effect of the particular implementation.
> > We are talking about code that was perfectly valid, had well-defined
> > semantics, and produced a working program.  I don't care about the
> > former; I do care about the latter.
> 
> How would you know that the code had perfectly valid, well-defined 
> semantics?

The programmer who wants to keep that code will know (or at least
he/she should).

> I've had the dubious pleasure of trying to maintain and update code 
> where the previous developer had a total disregard for things like 
> function declaration - he really did dismiss compiler complaints about 
> implicit function declarations as "only a warning".  The program worked 
> as he expected, for the most part.  But that was despite many functions 
> being defined in one part of the code with one set of parameters (number 
> and type), and called elsewhere with a different set - sometimes more 
> than one selection of parameter types in the same C file.

In this situation, where you take responsibility of a program someone
else wrote, and consider its code to be badly or unsafely written, TRT
is to modify the program to use more modern techniques that you
understand and support.  Compiler warnings and your own knowledge of
valid C will guide you in this.

My problem is not with the situation you described, it's with a case
that the "previous" maintainer remains the current maintainer, and/or
for some reason rewriting the code is either impractical or not the
best alternative for other reasons.

> If a compiler is required to continue to compile every program that a 
> previous version compiled, where the developer was satisfied that the 
> program worked as expected, then the only way to guarantee that is to 
> stop changing and improving gcc.

There should be no such requirement.  If addition of new features or
support for the evolving standards mean GCC must break old UB-style
code, that is fully justified and understandable.  What is _not_
justified, IMO, is breaking old programs without any such technical
reasons, just because we think they are better off broken.  IOW, as
long as being able to compile those old programs in the same old way
doesn't get in the way of being able to compile valid programs, the
old code should remain unbroken.

> I agree that accepting fully correct programs written in K&R C would not
> limit the future development of gcc.  But how many programs written in
> K&R C, big enough and important enough to be relevant today, are fully
> correct?  I'd be surprised if you needed more than one hand to count 
> them.

For the purpose of this discussion, the "how many" question is not
interesting.  If there is an incorrect K&R program, the results and
consequences of this incorrectness are on the maintainers of those
programs, not on the GCC team.  If some of those incorrect programs
break because of some valid development in GCC, so be it.

But the decision to make a warning to be an error is not dictated by
GCC development and its needs to compile valid programs, it is
dictated by other, largely non-technical considerations.  I'm saying
that these considerations are not reasons good enough for breaking
those old programs.

> Continuing to give developers what they expect, rather than what the 
> standards (and gcc extensions) guarantee, is always an issue for 
> backwards compatibility.  Each new version of gcc can, and sometimes 
> does, "break" old code - code that people relied on before, but was 
> actually incorrect.  This is unavoidable if gcc is to progress.

It is indeed unavoidable, and a fact of life.  All I'm saying is that
we as developers should try to minimize these breakages as much as
possible, without hampering new developments too much.

> That is why I suggested that a flag such as "-fold-code" that enables 
> long outdated syntaxes should also disable the kind of optimisations 
> that are most likely to cause issues with old code, and should enable 
> semantic changes to match likely assumptions in such code.  I don't
> believe in the existence of correct K&R C code - but I /do/ believe in
> the importance of some K&R C code despite its errors.

The problem, as you well know, is that when a large software package
fails to build, it is not immediately clear what is the reason for
that.  We here discuss a single such reason, but in reality no one
tells you that the build fails because the new version of GCC now
rejects code it accepted in the previous version.  The upgrade that
installed the new GCC will typically update dozens of other system
components, many of which could be the culprit.  Just understanding
that the reason is GCC and its rejection of certain constructs is a
job that can take hours full of frustration and hair-pulling.  This is
what I think we as developers need to avoid as much as possible.

Re: More C type errors by default for GCC 14

2023-05-11 Thread Eli Zaretskii via Gcc
> Date: Thu, 11 May 2023 00:46:23 -0400
> Cc: gcc@gcc.gnu.org
> From: Eli Schwartz via Gcc 
> 
> > And remember that `-traditional' DID exist for a certain amount of time.
> > Then it was removed.  So in addition to annoying a lot of people, what
> > guarantees that -Wno-implicit will not be removed in the future, after
> > the proposed changes are made?
> 
> 
> What guarantees of the future do you have for anything?
> 
> What guarantees do you have that a meteor won't hit Earth and wipe out
> all human life in a great catastrophe?
> 
> What guarantees do you have that GCC will still be run by the current
> maintainers?
> 
> What guarantees do you have that GCC will still be maintained at all?
> 
> What guarantees do you have that GCC won't decide next year that they
> are deleting all support for std > c89, making -traditional the default,
> and becoming a historical recreation society?
> 
> What guarantees do you have that GCC won't decide next year that they
> are deleting all support for std < c23, mandating that everyone upgrade
> to the very latest std that isn't even fully implemented today?
> 
> What guarantees do you have that reality exists as you think of it?
> Maybe you are a pink elephant and computers are a figment of your
> imagination.

Please be serious, and please don't mock your opponents.  This is a
serious discussion of a serious subject, not a Twitter post.

Back to the subject: the guarantees I would personally like to have is
that the current GCC development team sees backward compatibility as
an important goal, and will try not to break old programs without very
good technical reasons.  At least in Emacs development, that is the
consideration that is very high on our priority list when making
development decisions.  It would be nice if GCC (and any other GNU
project, for that matter) would do the same, because being able to
upgrade important tools and packages without fear is something users
value very much.  Take it from someone who uses GCC on various
platforms since version 1.40.


Re: More C type errors by default for GCC 14

2023-05-11 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 23:14:20 -0400
> From: Eli Schwartz via Gcc 
> 
> Second of all, why is this GCC's problem? You are not a user of GCC,
> apparently.

He is telling you that by removing support for these old features, you
draw users away from GCC and towards proprietary compilers.

One of the arguments in this thread _for_ dropping that support was
that by not rejecting those old programs, GCC draws some users away
from GCC.  He is telling you that this change will, perhaps, draw some
people to GCC, but will draw others away from GCC.  The difference is
that the former group will start using Clang, which is still free
software (at least some of its versions), whereas the latter group has
nowhere to go but to proprietary compilers.  So the FOSS community
will have suffered a net loss.  Something to consider, I think.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 14:37:50 -0400
> From: "James K. Lowden" 
> Cc: Jonathan Wakely 
> 
> On Tue, 9 May 2023 23:45:50 +0100
> Jonathan Wakely via Gcc  wrote:
> 
> > On Tue, 9 May 2023 at 23:38, Joel Sherrill wrote:
> > > We are currently using gcc 12 and specifying C11.  To experiment
> > > with these stricter warnings and slowly address them, would we need
> > > to build with a newer C version?
> > 
> > No, the proposed changes are to give errors (instead of warnings) for
> > rules introduced in C99. GCC is just two decades late in enforcing the
> > C99 rules properly!
> 
> This, it seems to me, is the crux of the question.  Code that does not
> conform to the standard should produce an error.

That's not what the standard says, and that's not how GCC behaves.
GCC has options to enforce the standard, but other than that, it
doesn't reject extensions and deviations from the standard.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 17:08:18 +
> From: Joseph Myers 
> CC: Jakub Jelinek , ,
>   , , ,
>   
> 
> On Wed, 10 May 2023, Eli Zaretskii via Gcc wrote:
> 
> > That is not the case we are discussing, AFAIU.  Or at least no one has
> > yet explained why accepting those old K&R programs will adversely
> > affect the ability of GCC to compile C2x programs.
> 
> At block scope,
> 
>   auto x = 1.5;
> 
> declares x to have type double in C2x (C++-style auto), but type int in 
> C89 (and is invalid for versions in between).  In this case, there is an 
> incompatible semantic change between implicit int and C++-style auto.  

So in this case, I'm okay with GCC changing the default behavior at
some point, such that the above is interpreted as C2x mandates, which
will then break some old programs.  This is another example of a "good
reason" for changing behavior in backward-incompatible ways.

But please note that emitting an error is not required, at least in my
book.  I assume GCC emits a warning about this already, and that
should be enough, until such time as you decide to adopt the C2x
interpretation of that by default -- without going through the
intermediate stage of erroring out by default.

> Giving an error before we make -std=gnu2x the default seems like a 
> particularly good idea, to further alert anyone who has been ignoring the 
> warnings about implicit int that semantics will change incompatibly.

FWIW, I don't see a reason to give an error.

> Enabling some of -Wall by default (as warnings, not errors) might well 
> also be beneficial to users, though care would be needed to exclude those
> warnings that involve stylistic choices (e.g. -Wparentheses) or have false 
> positives that are hard to fix - not all of -Wall is for code that is 
> objectively suspicious independent of the chosen coding style.

IMO and IME, anything is better than errors.  I presume everyone on
this list is familiar with the frustrating experience of having a
large program suddenly fail to build with some strange-looking error
message, which launches you down the rabbit hole of trying to
understand what happened and why.  And if that happens as part of
running the configure script (as I understand is one of the potential
victims of that), that is even scarier, because most people don't read
the configure script and don't always understand what is going on
there and why; it is also not very easy to debug.

So I urge the GCC developers to try to avoid errors as much as
possible, as long as GCC is capable to produce code with some widely
adopted semantics, and break backward compatibility only if otherwise
GCC will be unable to implement newer features.  (And to be pedantic,
I don't consider new warnings to be new features in this context.)


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Richard Biener 
> Date: Wed, 10 May 2023 18:33:53 +0200
> Cc: Jakub Jelinek , gabrav...@gmail.com,
>  jwakely@gmail.com, fwei...@redhat.com, gcc@gcc.gnu.org, ar...@aarsen.me
> 
> 
> 
> > On 10.05.2023 at 18:31, Eli Zaretskii via Gcc wrote:
> > The examples you gave are the ones I could accept as "good reasons"
> > for breaking backward compatibility.  That's because breaking that is
> > unavoidable if GCC wants to support the newer standard.
> > 
> > That is not the case we are discussing, AFAIU.  Or at least no one has
> > yet explained why accepting those old K programs will adversely
> > affect the ability of GCC to compile C2x programs.
> 
> But we are discussing rejecting K&R programs only when C99 or later
> standards are applied (those are applied by default)

I understand, but I don't see the relevance.  Are you saying that
"-std=c99" accepts _only_ C99 valid constructs?  What about gnu99?


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 18:20:50 +0200
> From: David Brown via Gcc 
> 
> > Adding a flag to a Makefile is infinitely easier than fixing old
> > sources in a way that they produce the same machine code.
> 
> The suggestion has been - always - that support for old syntaxes be 
> retained.  But that flag should be added to the makefiles of the 0.01% 
> of projects that need it because they have old code - not the 99.99% of 
> projects that are written (or updated) this century.

Percentages don't count when you are the one who is in trouble.

> > Exactly.  We cannot reasonably expect that a compiler which needs to
> > support 50 years of legacy code to be as safe as a compiler for a
> > language invented yesterday afternoon.  People who want a safe
> > programming environment should not choose C as their first choice.
> 
> We cannot expect a /language/ with a 50 year history to be as safe as a 
> modern one.  But we can expect a /compiler/ released /today/ to be as 
> safe as it can be made /today/.

Not if the compiler should support legacy code, we can't.

Anyway, we are repeating ourselves.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 18:02:53 +0200
> From: Jakub Jelinek 
> Cc: gabrav...@gmail.com, jwakely@gmail.com, fwei...@redhat.com,
> gcc@gcc.gnu.org, ar...@aarsen.me
> 
> > If some program is plainly invalid, not just because the criteria of
> > validity have shifted, then yes, such a program should be rejected.
> 
> Many of the accepts-invalid cases are when something used to be valid in
> some older standard and is not valid in a newer standard, and often it
> even changes meaning completely in an even newer standard.
> Examples include e.g. the auto keyword, which means something completely
> different in C++11 and later than what it meant in C++98, or say comma in
> array reference in C++17 vs. C++20 vs. C++23 (a[1, 2] is the same as a[(1, 2)]
> in C++17, got deprecated in C++20 and is ill-formed or changed meaning
> in C++23 (multi-dimensional array operator).
> Or any time something that wasn't a keyword in older standard version
> and is a keyword in a newer standard.
> alignas/alignof/nullptr/static_assert/thread_local in C++11 and C23,
> char16_t/char32_t/constexpr/decltype/noexcept in C++11,
> constinit/consteval in C++20,
> bool/false/true/typeof_unqual in C23.
> 
> int bool = 1;
> is completely valid C17 if one doesn't include the <stdbool.h> header,
> or
> int static_assert = 2;
> valid C17 if one doesn't include <assert.h>
> etc.  These used to compile, but no longer do when using -std=c2x, and in
> a few years, when -std=gnu23 becomes the default, they will not compile by
> default, even though they used to be valid C17.

The examples you gave are the ones I could accept as "good reasons"
for breaking backward compatibility.  That's because breaking that is
unavoidable if GCC wants to support the newer standard.

That is not the case we are discussing, AFAIU.  Or at least no one has
yet explained why accepting those old K&R programs will adversely
affect the ability of GCC to compile C2x programs.



Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 17:58:16 +0200
> From: David Brown via Gcc 
> 
> > In any case, I was not not talking about bug-compatibility, I was
> > talking about being able to compile code which GCC was able to compile
> > in past versions.  Being able to compile that code is not a bug, it's
> > a feature.
> 
> No, being able to compile /incorrect/ code by default is a bug.  It is 
> not helpful.

I actually agree; we just have different definitions of "incorrect".

> I've seen this kind of argument many times - "The compiler used to 
> accept my code and give the results I wanted, and now newer compiler 
> versions make a mess of it".

But we are not talking about some random code that just happened to
slip through cracks as a side effect of the particular implementation.
We are talking about code that was perfectly valid, had well-defined
semantics, and produced a working program.  I don't care about the
former; I do care about the latter.

> If the gcc developers really were required to continue to compile /all/ 
> programs that compiled before, with the same results, then the whole gcc 
> project can be stopped.

You will have to explain this to me.  Just stating this is not enough.
How will accepting K&R stop GCC development?

As for the two's complement wrapping example: I'm okay with having
this broken because some useful feature requires to modify the basic
arithmetics and instructions emitted by GCC in a way that two's
complement wrapping can no longer be supported.  _That_ is exactly an
example of a "good reason" for backward incompatibility: GCC must do
something to compile valid programs, and that something is
incompatible with old programs which depended on some de-facto
standard that is nowadays considered UB.  But the case in point is not
like that, AFAIU: in this case, GCC will deliberately break a program
although it could compile it without adversely affecting its output
for any other valid program.  To me, this would be an arbitrary
decision of the GCC developers to break someone's code that has no
"good reasons" which I could understand and respect, let alone accept.

> The only way to ensure perfect backwards compatibility would be to
> stop development, and no longer release any new versions of the
> compiler.  That is the logical consequence of "it used to compile
> (with defaults or a given set of flags), so it should continue to
> compile (with these same flags)" - assuming "compile" here means
> "giving the same resulting behaviour in the executable" rather than
> just "giving an executable that may or may not work".

This is not the logical consequence, this is reductio ad absurdum, a
kind of strawman.  There's no need to go to such extremes, because
"good reasons" for breaking backward compatibility do exist.  I'm a
co-maintainer of GNU Emacs, a program that attempts not to break
habits of users burned into their muscle memories for the last 30
years; don't you think I know a bit what I'm talking about?


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 16:31:23 +0200
> From: Thomas Koenig via Gcc 
> 
> On 10.05.23 14:03, Jakub Jelinek via Gcc wrote:
> > We do such changes several times a year, where we reject something that has
> > been previously accepted in older standards, admittedly mostly in C++.
> 
> ... and in Fortran.

Tell me about it.  Just a couple of months ago I needed to compile the
venerable Adventure game from 1977 sources (for my grandson).  That
was no fun, although there was, of course, nothing wrong with the
source code, and once I tweaked gfortran into accepting that flavor of
Fortran, it compiled and ran just fine.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 16:22:26 +0200
> From: Jakub Jelinek 
> Cc: Gabriel Ravier , jwakely@gmail.com,
> fwei...@redhat.com, gcc@gcc.gnu.org, ar...@aarsen.me
> 
> > > Are you seriously saying that no accepts-invalid bug should ever be 
> > > fixed under any circumstances on the basis that some programmers might 
> > > rely on code exploiting that bug ??
> > 
> > Sorry, I'm afraid I don't understand the question.  What are
> > "accepts-invalid bugs"?
> 
> They are bugs where compiler accepts something that isn't valid in
> the selected language nor considered valid extension.
> So, after the fix we reject something that has been accepted before.

If some program is plainly invalid, not just because the criteria of
validity have shifted, then yes, such a program should be rejected.

> What we are talking about in this thread is also something not valid in
> C99 or later (for the int-conversion stuff, not even valid in C89), in the
> past accepted just for K&R legacy reasons.

Yes, and that's the crucial (for me) difference: what is currently
considered invalid was valid in the past, and so there's a set of
rules under which that kind of program can produce valid machine code.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 15:30:02 +0200
> From: David Brown via Gcc 
> 
> >>> If some developers want to ignore warnings, it is not the business of
> >>> GCC to improve them, even if you are right in assuming that they will
> >>> not work around errors like they work around warnings (and I'm not at
> >>> all sure you are right in that assumption).  But by _forcing_ these
> >>> errors on _everyone_, GCC will in effect punish those developers who
> >>> have good reasons for not changing the code.
> 
> What would those "good reasons" be, in your opinion?

For example, something that adversely affects GCC itself and its
ability to compile valid programs.

> On the other hand, continuing to accept old, outdated code by lax 
> defaults is punishing /current/ developers and users.  Why should 99.99% 
> of current developers have to enable extra errors to catch mistakes (and 
> we all make occasional mistakes in our coding - so they /should/ be 
> enabling these error flags)?

Adding a flag to a Makefile is infinitely easier than fixing old
sources in a way that they produce the same machine code.

> I do agree that backwards compatibility breaks should only be done for 
> good reasons.  But I think the reasons are good.

Not good enough, not for such a radical shift in the balance between
the two groups.

> > And no,
> > educating/forcing GCC users to use more modern dialect of C is not a
> > good reason.
> > 
> 
> Yes, it /is/ a good reason.

Not for a compiler.  A compiler is a tool; it is none of its business
to teach me what is and what isn't a good dialect in each particular
case.  Hinting on that, via warnings, is sufficient and perfectly
okay, but _forcing_ me is not.

> Consider why Rust has become the modern fad in programming.  People 
> claim it is because it is inherently safer than C and C++.  It is not. 
> There are really two reasons for it appearing to be safer.  One is that 
> the /defaults/ for the tools, and the language idioms, are safer than 
> the /defaults/ for C and C++ tools.  That makes it harder to make 
> mistakes.  The other is that it has no legacy of decades of old code and 
> old habits, and no newbie programmers copying those old styles.

Exactly.  We cannot reasonably expect a compiler which needs to
support 50 years of legacy code to be as safe as a compiler for a
language invented yesterday afternoon.  People who want a safe
programming environment should not choose C as their first choice.

> So yes, anything that pushes C programmers into being better C 
> programmers is worth considering, IMHO.  We will never stamp out bad 
> programming, but we can try to help them - giving them better tools that 
> help them spot problems early is a step forward.

I agree, I'm just saying that warnings are helpful enough -- for those
who want to be helped.

> > Once again: it isn't "broken code".  It is dangerous code, and in some
> > cases unintentionally suspicious code.  But it isn't broken, because
> > GCC can compile it into a valid program, which, if the programmer
> > indeed meant that, will work and do its job.
> 
> Sweeping problems under the carpet and hoping no one trips over the 
> bumps is, at best, pushing problems down the road for future developers.

I'm not sweeping anything.  This is not GCC's problem to solve, that's
all.  If the developer avoids dealing with this problem, then he or
she might be sweeping the problem under the carpet.  But this is not
GCC's problem.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Marcin Jaczewski 
> Date: Wed, 10 May 2023 14:41:40 +0200
> 
> Did you even check if the compiler output is still correct?

Yes.

> You mention "validations and verifications", do you do the same
> with the new compiler?

Yes.  But that is a fraction of the effort needed when the source
changes.

> If you can't touch code then you SHOULD not upgrade the compiler.

As I tried to explain, this is not really possible, unless the entire
system is also kept without any changes, which is also impossible,
because hardware gets old and needs to be replaced, and newer hardware
doesn't support old systems.

> Any big project (like Linux) shows these two rules are critical;
> there were multiple cases of security bugs caused by subtle changes
> in the behavior of the compiler.
> Compiling very old code is a liability if nobody knows how it should work and
> nobody maintains it. Who can give you a guarantee that the result is correct?
> Very old programs should even reject new compilers by default until
> someone checks that they compile correctly with new compilers.

This is all true, but the same problems exist even if the programs
don't use outdated C dialect.  So these issues are independent, almost
orthogonal to the issue at hand.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 14:41:27 +0200
> Cc: jwakely@gmail.com, fwei...@redhat.com, gcc@gcc.gnu.org,
>  ar...@aarsen.me
> From: Gabriel Ravier 
> 
> >>> Because GCC is capable of compiling it.
> >> That is not a good argument.  GCC is capable of compiling any code in all
> >> the reported accepts-invalid bugs on which it doesn't ICE.  That doesn't
> >> mean those bugs shouldn't be fixed.
> > Fixing those bugs, if they are bugs, is not the job of the compiler.
> > It's the job of the programmer, who is the one that knows what the
> > code was supposed to do.  If there's a significant risk that the code
> > is a mistake or might behave in problematic ways, a warning to that
> > effect is more than enough.
> 
> Are you seriously saying that no accepts-invalid bug should ever be 
> fixed under any circumstances on the basis that some programmers might 
> rely on code exploiting that bug ??

Sorry, I'm afraid I don't understand the question.  What are
"accepts-invalid bugs"?

In any case, I was not talking about bug-compatibility, I was
talking about being able to compile code which GCC was able to compile
in past versions.  Being able to compile that code is not a bug, it's
a feature.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Sam James 
> Cc: David Brown , gcc@gcc.gnu.org
> Date: Wed, 10 May 2023 13:32:08 +0100
> 
> > I'm okay with making it harder, but without making it too hard for
> > those whose reasons for not changing the code are perfectly valid.
> > This proposal crosses that line, IMNSHO.
> 
> Could you give an example of how to make it harder without crossing
> the line for you?

Not really: I'm not involved enough in GCC development to be able to
provide such concrete examples off the top of my head.  I could think
about something like making it harder to disable the warnings about
such code, for example?


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Jonathan Wakely 
> Date: Wed, 10 May 2023 13:30:10 +0100
> Cc: ar...@aarsen.me, dje@gmail.com, ja...@redhat.com, gcc@gcc.gnu.org
> 
> People are still using C to write new programs, and they are still
> making avoidable mistakes. The default for new code using new -std
> modes should be safer and less error prone.

I agree.  I just think that a warning strikes the right balance
between the two extremes.  And it is a balance, because once upon a
time, GCC didn't even warn about such code.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Neal Gompa 
> Date: Wed, 10 May 2023 08:10:48 -0400
> Cc: s...@gentoo.org, eg...@gwmail.gwu.edu, jwakely@gmail.com, 
>   j...@rtems.org, dje@gmail.com, ja...@redhat.com, ar...@aarsen.me, 
>   gcc@gcc.gnu.org, c-std-port...@lists.linux.dev
> 
> On Wed, May 10, 2023 at 8:05 AM Eli Zaretskii  wrote:
> >
> > > From: Neal Gompa 
> > > Date: Wed, 10 May 2023 06:56:32 -0400
> > > Cc: Eric Gallager , Jonathan Wakely 
> > > , j...@rtems.org,
> > >   David Edelsohn , Eli Zaretskii , 
> > > Jakub Jelinek ,
> > >   Arsen Arsenović , gcc@gcc.gnu.org,
> > >   c-std-port...@lists.linux.dev
> > >
> > > Right, we've been going through a similar effort with C++ over the
> > > past decade. GCC incrementally becoming more strict on C++ has been an
> > > incredibly painful experience, and it eats away a ton of time that I
> > > would have spent dealing with other problems. Having one big event
> > > where the majority of changes to make the C compiler strict happen
> > > will honestly make it less painful, even if it doesn't seem like it at
> > > the moment.
> >
> > But not having such an event, ever, would be even less painful.
> 
> That's not going to happen.

Well, I hope it will.  Otherwise I wouldn't be partaking in this
discussion.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 14:03:01 +0200
> From: Jakub Jelinek 
> Cc: Jonathan Wakely , fwei...@redhat.com,
> gcc@gcc.gnu.org, ar...@aarsen.me
> 
> > > Why should this compile?
> > 
> > Because GCC is capable of compiling it.
> 
> That is not a good argument.  GCC is capable of compiling any code in all
> the reported accepts-invalid bugs on which it doesn't ICE.  That doesn't
> mean those bugs shouldn't be fixed.

Fixing those bugs, if they are bugs, is not the job of the compiler.
It's the job of the programmer, who is the one that knows what the
code was supposed to do.  If there's a significant risk that the code
is a mistake or might behave in problematic ways, a warning to that
effect is more than enough.

> C99 for the above says:

I know what the standard says, but since when do we in the GNU project
accept standards as a dictate?  We do what we consider to be best for
our users, and follow the standards when that doesn't contradict what
we think is best for the users.  GCC has, for example, -std=gnu99
etc. precisely for that purpose.

> The proposal is essentially to stop accepting this as a GNU extension
> which was added for K&R compatibility I assume and do that only for C99 and
> later.

I understand.  I'm saying that there's no reason to make this an
error, because it will break builds that have good reasons for keeping
such code.

> Note, this isn't valid even in C89 and is already rejected with
> -pedantic-errors for years.

Terrific!  Rejecting such code given a non-default option is _exactly_
what should be done.  But we here are discussing the default behavior.

> > It compiles today with a warning, so that whoever is interested to fix
> > the code, can do that already.  The issue at hand is not whether to
> > flag the code as highly suspicious, the issue at hand is whether
> > to upgrade the warning to errors.  So let's talk about the issue at hand,
> > not about something else, okay?
> 
> We do such changes several times a year, where we reject something that has
> been previously accepted in older standards, admittedly mostly in C++.

And that is a Good Thing?  I don't think so.  Maybe for C++ it's
inevitable, I'm not an expert on that.  But making breaking changes is
inherently BAD and should be avoided.

> Yes, it is done far less in C, but still, as the above is invalid already in
> C89, users had over 3 decades to fix their code, and in many cases they
> didn't and without this move they will never bother.

Please consider those cases where the code cannot be "fixed", in
practice.  I described one such situation in a previous message.

> A lot of such broken code has been even written in those 3 decades, doesn't
> predate it, but because the compiler just warned on it, it still appeared in
> the code bases.  If we wait with this change another 2 decades, nothing will
> change and we'll have the same problem then.

GCC is not responsible for the existence of that code.  So GCC
shouldn't change its decades-long behavior just because that code is
there.  There must be a much more serious reason for such changes,
something that affects GCC itself.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Jonathan Wakely 
> Date: Wed, 10 May 2023 12:56:48 +0100
> Cc: Arsen Arsenović , dje@gmail.com, 
>   ja...@redhat.com, gcc@gcc.gnu.org
> 
> On Wed, 10 May 2023 at 12:51, Eli Zaretskii wrote:
> > Once again, it is not GCC's business to clean up the packages which
> > use GCC as the compiler.  GCC is a tool, and should allow any
> > legitimate use of it that could be useful to someone.  Warning about
> > dubious usage is perfectly fine, as it helps those who do that
> > unintentionally or due to ignorance.  But completely failing an
> > operation that could have produced valid code is too radical.
> 
> Again (are you even reading the replies?)

Please assume that I read everything, subject to email delivery times.
There's no reason for you to assume anything but good faith from my
side.

> GCC will not force anybody to change code, at most it this change
> would force them to consciously and intentionally say "I know this is
> not valid C code but I want to compile it anyway". By using a compiler
> option. This is not draconian, and you sound quite silly.

If we are not forcing code change, why bother with making it an error
at all?  The only reason for doing so that was provided was that this
_is_ a way of forcing people to change their programs.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Jonathan Wakely 
> Date: Wed, 10 May 2023 12:49:52 +0100
> Cc: David Brown , gcc@gcc.gnu.org
> 
> > If some developers want to ignore warnings, it is not the business of
> > GCC to improve them, even if you are right in assuming that they will
> > not work around errors like they work around warnings (and I'm not at
> > all sure you are right in that assumption).  But by _forcing_ these
> > errors on _everyone_, GCC will in effect punish those developers who
> > have good reasons for not changing the code.
> 
> There will be options you can use to continue compiling the code
> without changing it. You haven't given a good reason why it's OK for
> one group of developers to have to use options to get their desired
> behaviour from GCC, but completely unacceptable for a different group
> to have to use options to get their desired behaviour.
> 
> This is just a change in defaults.

A change in defaults that is not backward-compatible should only be
done for very good reasons, because it breaks something that was
working for years.  No such good reasons were provided.  And no,
educating/forcing GCC users to use more modern dialect of C is not a
good reason.

> Accepting broken code by default is not a priori a good thing, as
> you seem to insist. Rejecting it by default is not a priori a good
> thing. There is a pragmatic choice to be made, and your argument is
> still no more than "it compiles today, so it should compile
> tomorrow".

Once again: it isn't "broken code".  It is dangerous code, and in some
cases unintentionally suspicious code.  But it isn't broken, because
GCC can compile it into a valid program, which, if the programmer
indeed meant that, will work and do its job.

> > > Agreed.  But if we can make it harder for them to release bad code,
> > > that's good overall.
> >
> > I'm okay with making it harder, but without making it too hard for
> > those whose reasons for not changing the code are perfectly valid.
> > This proposal crosses that line, IMNSHO.
> 
> Where "too hard" means using a compiler option. Seriously? This seems 
> farcical.

This goes both ways, of course.  GCC had -Werror since about forever.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Neal Gompa 
> Date: Wed, 10 May 2023 06:56:32 -0400
> Cc: Eric Gallager , Jonathan Wakely 
> , j...@rtems.org, 
>   David Edelsohn , Eli Zaretskii , Jakub 
> Jelinek , 
>   Arsen Arsenović , gcc@gcc.gnu.org, 
>   c-std-port...@lists.linux.dev
> 
> On Wed, May 10, 2023 at 6:48 AM Sam James  wrote:
> >
> > Neal Gompa wasn't keen on the idea at
> > https://lore.kernel.org/c-std-porting/CAEg-Je8=dQo-jAdu=od5dh+h9aqzge_4ghzgx_ow4ryjvpw...@mail.gmail.com/
> > because it'd feel like essentially "repeated punches".
> >
> > Maybe it'd work with some tweaks: I would, however, be more open to GCC 14 
> > having
> > implicit-function-declaration,implicit-int (these are so closely related
> > that it's not worth dividing the two up) and then say, GCC 15 having 
> > int-conversion and maybe
> > incompatible-pointer-types. But spreading it out too much is likely 
> > counterproductive.
> 
> Right, we've been going through a similar effort with C++ over the
> past decade. GCC incrementally becoming more strict on C++ has been an
> incredibly painful experience, and it eats away a ton of time that I
> would have spent dealing with other problems. Having one big event
> where the majority of changes to make the C compiler strict happen
> will honestly make it less painful, even if it doesn't seem like it at
> the moment.

But not having such an event, ever, would be even less painful.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Eric Gallager 
> Date: Wed, 10 May 2023 06:40:54 -0400
> Cc: j...@rtems.org, David Edelsohn , Eli Zaretskii 
> , 
>   Jakub Jelinek , Arsen Arsenović , 
>   "gcc@gcc.gnu.org" 
> 
> Idea for a compromise: What if, instead of flipping the switch on all
> 3 of these at once, we staggered them so that each one becomes a
> default in a separate release? i.e., something like:
> 
> - GCC 14: -Werror=implicit-function-declaration gets added to the defaults
> - GCC 15: -Werror=implicit-int gets added to the defaults
> - GCC 16: -Werror=int-conversion gets added to the defaults
> 
> That would give people more time to catch up on a particular warning,
> rather than overwhelming them with a whole bunch all at once. Just an
> idea.

What do we tell those who cannot possibly "catch up", for whatever
valid reasons?  E.g., consider a program written many years ago, which
is safety-critical, and where making any changes requires so many
validations and verifications that it is simply impractical, and will
never be done.  Why would we want to break such programs?

And that is just one example of perfectly valid reasons for not
wanting or not being able to make changes to pacify GCC.

Once again, my bother is not about "villains" who don't want to get
their act together, my bother is about cases such as the one above,
where the developers simply have no practical choice.

And please don't tell me they should use an older GCC, because as
systems go forward and are upgraded, older GCC will not work anymore.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Arsen Arsenović 
> Cc: dje@gmail.com, ja...@redhat.com, jwakely@gmail.com, 
> gcc@gcc.gnu.org
> Date: Wed, 10 May 2023 10:36:23 +0200
> 
> Eli Zaretskii  writes:
> 
> > It is not GCC's business to force developers of packages to get their
> > act together.
> 
> Why not?  Compilers are diagnostic tools besides just machines that
> guess what machine code you mean.

Diagnostics does not mean error.  A warning is also diagnostics.

> > It is the business of those package developers themselves.  GCC should
> > give those developers effective and convenient means of detecting any
> > unsafe and dubious code and of correcting it as they see fit.  Which
> > GCC already does by emitting warnings.
> 
> There's a difference between dubious and unsafe code and code that is
> unambiguously wrong, but was chosen to be accepted many years ago.

Is GCC indeed capable of reliably distinguishing between these two?

And that code is not unambiguously wrong, since it is valid under old
standards, and thus can be compiled, and did compile, into a working
program.

> > GCC should only error out if it is completely unable to produce valid
> > code, which is not the case here, since it has been producing valid
> > code for ages.
> 
> Producing call code with wrong prototypes is not within my definition of
> producing valid code.

I don't think I follow.  If the produced machine code is valid and
does what the programmer meant, why should we care about the
prototypes?

> > It is a disservice to GCC users if a program that compiled yesterday
> > and worked perfectly well suddenly cannot be built because GCC was
> > upgraded, perhaps due to completely unrelated reasons.
> 
> Please see the various porting-to pages.  Compilers stop being able to
> produce code with older versions of programs because of them being a
> lil' too lax and the programs accidentally relying on that every year.
> There's nothing wrong there.
> 
> If compilers stopped being lax, such things wouldn't happen simply
> because programs couldn't accidentally rely on it, so we'd get the ideal
> world without breakages.  We don't get that by pretending code is fine
> when it is not, and letting developers write that code.

Once again, it is not GCC's business to clean up the packages which
use GCC as the compiler.  GCC is a tool, and should allow any
legitimate use of it that could be useful to someone.  Warning about
dubious usage is perfectly fine, as it helps those who do that
unintentionally or due to ignorance.  But completely failing an
operation that could have produced valid code is too radical.

We all want that the code of the packages be clean and according to
standards.  But using draconian measures towards that goal is dead
wrong, and is basically against the libertarian spirit that gave birth
to Free Software.  It is not an accident that GPL doesn't disallow
writing badly written or even crashing programs.

> > It would be a grave mistake on the part of GCC to decide that part of
> > its mission is to teach package developers how to write their code and
> > when and how to modify it.
> 
> It would be a grave mistake on the part of GCC to decide that part of
> its mission is to pretend code is fine when it is unambiguously broken,
> and then not tell people about it very loudly.

It is not broken, certainly not "unambiguously".  It did compile and
work in the very recent past.

> I don't think we should send out the message of "GCC: the compiler for
> your untouchable legacy code, not for writing new code, or upgrading
> existing code".

GCC sends the messages "don't write bad or dubious code" by emitting
warnings about such code.  There's no need to have a virtual gun
pointed to the heads of package developers to make that message so
much stronger, because doing that shifts the balance from merely being
a good and friendly tool towards second-guessing all of the GCC users
and knowing better than they do what they want.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> Date: Wed, 10 May 2023 10:49:32 +0200
> From: David Brown via Gcc 
> 
> > People who ignore warnings will use options that disable these new
> > errors, exactly as they disable warnings.  So we will end up not
> > reaching the goal, but instead harming those who are well aware of the
> > warnings.
> 
> My experience is that many of the people who ignore warnings are not 
> particularly good developers, and not particularly good at 
> self-improvement.  They know how to ignore warnings - the attitude is 
> "if it really was a problem, the compiler would have given an error 
> message, not a mere warning".  They don't know how to disable error 
> messages, and won't bother to find out.  So they will, in fact, be a lot 
> more likely to fix their code.

If some developers want to ignore warnings, it is not the business of
GCC to improve them, even if you are right in assuming that they will
not work around errors like they work around warnings (and I'm not at
all sure you are right in that assumption).  But by _forcing_ these
errors on _everyone_, GCC will in effect punish those developers who
have good reasons for not changing the code.

> > IOW, if we are targeting people for whom warnings are not enough, then
> > we have already lost the battle.  Discipline cannot be forced by
> > technological means, because people will always work around.
> > 
> 
> Agreed.  But if we can make it harder for them to release bad code, 
> that's good overall.

I'm okay with making it harder, but without making it too hard for
those whose reasons for not changing the code are perfectly valid.
This proposal crosses that line, IMNSHO.


Re: More C type errors by default for GCC 14

2023-05-10 Thread Eli Zaretskii via Gcc
> From: Jonathan Wakely 
> Date: Wed, 10 May 2023 09:04:12 +0100
> Cc: Florian Weimer , "gcc@gcc.gnu.org" , 
>   Jakub Jelinek , Arsen Arsenović 
> 
> void foo(int);
> void bar() { foo("42"); }
> 
> Why should this compile?

Because GCC is capable of compiling it.

> You keep demanding better rationale for the change, but your argument amounts 
> to nothing more than
> "it compiles today, it should compile tomorrow".

It compiles today with a warning, so that whoever is interested to fix
the code, can do that already.  The issue at hand is not whether to
flag the code as highly suspicious, the issue at hand is whether
to upgrade the warning to errors.  So let's talk about the issue at hand,
not about something else, okay?
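
For reference, here is the snippet in question expanded into a complete
program (a sketch, not taken from the quoted messages); with the current
defaults it compiles and runs, drawing only a -Wint-conversion warning,
and the proposal is to reject it instead:

#include <stdio.h>

void foo (int);

void
bar (void)
{
  foo ("42");           /* pointer argument for an int parameter */
}

void
foo (int x)
{
  printf ("foo received %d\n", x);
}

int
main (void)
{
  bar ();
  return 0;
}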


Re: More C type errors by default for GCC 14

2023-05-09 Thread Eli Zaretskii via Gcc
> From: Arsen Arsenović 
> Cc: Eli Zaretskii , Jakub Jelinek ,
>  jwakely@gmail.com, gcc@gcc.gnu.org
> Date: Tue, 09 May 2023 22:21:03 +0200
> 
> > The concern is using the good will of the GNU Toolchain brand as the tip of
> > the spear or battering ram to motivate software packages to fix their
> > problems. It's using GCC as leverage in a manner that is difficult for
> > package maintainers to avoid.  Maybe that's a necessary approach, but we
> > should be clear about the reasoning.  Again, I'm not objecting, but let's
> > clarify why we are choosing this approach.
> 
> Both the GNU Toolchain and the GNU Toolchain users will benefit from a
> stricter toolchain.
> 
> People can and have stopped using the GNU Toolchain due to lackluster
> and non-strict defaults.  This is certainly not positive for the brand,
> and I doubt it buys it much good will.

It is not GCC's business to force developers of packages to get their
act together.  It is the business of those package developers
themselves.  GCC should give those developers effective and convenient
means of detecting any unsafe and dubious code and of correcting it as
they see fit.  Which GCC already does by emitting warnings.  GCC
should only error out if it is completely unable to produce valid
code, which is not the case here, since it has been producing valid
code for ages.

It is a disservice to GCC users if a program that compiled yesterday
and worked perfectly well suddenly cannot be built because GCC was
upgraded, perhaps due to completely unrelated reasons.  It would be a
grave mistake on the part of GCC to decide that part of its mission is
to teach package developers how to write their code and when and how
to modify it.


Re: More C type errors by default for GCC 14

2023-05-09 Thread Eli Zaretskii via Gcc
> From: Florian Weimer 
> Cc: Jakub Jelinek ,  Eli Zaretskii ,
>   jwakely@gmail.com,  ar...@aarsen.me
> Date: Tue, 09 May 2023 22:57:20 +0200
> 
> * Eli Zaretskii via Gcc:
> 
> >> Date: Tue, 9 May 2023 21:07:07 +0200
> >> From: Jakub Jelinek 
> >> Cc: Jonathan Wakely , ar...@aarsen.me, 
> >> gcc@gcc.gnu.org
> >> 
> >> On Tue, May 09, 2023 at 10:04:06PM +0300, Eli Zaretskii via Gcc wrote:
> >> > People who ignore warnings will use options that disable these new
> >> > errors, exactly as they disable warnings.  So we will end up not
> >> 
> >> Some subset of them will surely do that.  But I think most people will just
> >> fix the code when they see hard errors, rather than trying to work around
> >> them.
> >
> > The same logic should work for warnings.  That's why we have warnings,
> > no?
> 
> People completely miss the warning and go to great lengths to show that
> what they are dealing is a compiler bug.  (I tried to elaborate on that
> in <87cz394b63@oldenburg.str.redhat.com>.)  If GCC errors out, that
> simply does not happen because there is no object code to examine.

And then people will start complaining about GCC unnecessarily
erroring out, which is a compiler bug, since there's no problem
producing correct code in these cases.


Re: More C type errors by default for GCC 14

2023-05-09 Thread Eli Zaretskii via Gcc
> Date: Tue, 9 May 2023 21:07:07 +0200
> From: Jakub Jelinek 
> Cc: Jonathan Wakely , ar...@aarsen.me, gcc@gcc.gnu.org
> 
> On Tue, May 09, 2023 at 10:04:06PM +0300, Eli Zaretskii via Gcc wrote:
> > > From: Jonathan Wakely 
> > > Date: Tue, 9 May 2023 18:15:59 +0100
> > > Cc: Arsen Arsenović , gcc@gcc.gnu.org
> > > 
> > > On Tue, 9 May 2023 at 17:56, Eli Zaretskii wrote:
> > > >
> > > > No one has yet explained why a warning about this is not enough, and
> > > > why it must be made an error.  Florian's initial post doesn't explain
> > > > that, and none of the followups did, although questions about whether
> > > > a warning is not already sufficient were asked.
> > > >
> > > > That's a simple question, and unless answered with valid arguments,
> > > > the proposal cannot make sense to me, at least.
> > > 
> > > People ignore warnings. That's why the problems have gone unfixed for
> > > so many years, and will continue to go unfixed if invalid code keeps
> > > compiling.
> > 
> > People who ignore warnings will use options that disable these new
> > errors, exactly as they disable warnings.  So we will end up not
> 
> Some subset of them will surely do that.  But I think most people will just
> fix the code when they see hard errors, rather than trying to work around
> them.

The same logic should work for warnings.  That's why we have warnings,
no?


Re: More C type errors by default for GCC 14

2023-05-09 Thread Eli Zaretskii via Gcc
> From: Jonathan Wakely 
> Date: Tue, 9 May 2023 18:15:59 +0100
> Cc: Arsen Arsenović , gcc@gcc.gnu.org
> 
> On Tue, 9 May 2023 at 17:56, Eli Zaretskii wrote:
> >
> > No one has yet explained why a warning about this is not enough, and
> > why it must be made an error.  Florian's initial post doesn't explain
> > that, and none of the followups did, although questions about whether
> > a warning is not already sufficient were asked.
> >
> > That's a simple question, and unless answered with valid arguments,
> > the proposal cannot make sense to me, at least.
> 
> People ignore warnings. That's why the problems have gone unfixed for
> so many years, and will continue to go unfixed if invalid code keeps
> compiling.

People who ignore warnings will use options that disable these new
errors, exactly as they disable warnings.  So we will end up not
reaching the goal, but instead harming those who are well aware of the
warnings.

IOW, if we are targeting people for whom warnings are not enough, then
we have already lost the battle.  Discipline cannot be forced by
technological means, because people will always work around.


Re: More C type errors by default for GCC 14

2023-05-09 Thread Eli Zaretskii via Gcc
> From: Sam James 
> Cc: Arsen Arsenović , d...@killthe.net,
>  jwakely@gmail.com, gcc@gcc.gnu.org
> Date: Tue, 09 May 2023 18:05:09 +0100
> 
> Eli Zaretskii via Gcc  writes:
> 
> >> Cc: Jonathan Wakely , gcc@gcc.gnu.org
> >> Date: Tue, 09 May 2023 18:38:05 +0200
> >> From: Arsen Arsenović via Gcc 
> >> 
> >> You're actively dismissing the benefit.
> >
> > Which benefit?
> >
> > No one has yet explained why a warning about this is not enough, and
> > why it must be made an error.  Florian's initial post doesn't explain
> > that, and none of the followups did, although questions about whether
> > a warning is not already sufficient were asked.
> >
> > That's a simple question, and unless answered with valid arguments,
> > the proposal cannot make sense to me, at least.
> 
> My email covers this:
> https://gcc.gnu.org/pipermail/gcc/2023-May/241269.html.

If it does, I missed it, even upon second reading now.

Again, the question is: why warning is not enough?

> I'd also note that some of the issues I've seen were already flagged
> in people's CI but they didn't notice because it was just a warning.

The CI can run with non-default flags, if they don't pay attention to
warnings.  If that's the only reason, then I'm sorry, it is not strong
enough.


Re: More C type errors by default for GCC 14

2023-05-09 Thread Eli Zaretskii via Gcc
> Cc: Jonathan Wakely , gcc@gcc.gnu.org
> Date: Tue, 09 May 2023 18:38:05 +0200
> From: Arsen Arsenović via Gcc 
> 
> You're actively dismissing the benefit.

Which benefit?

No one has yet explained why a warning about this is not enough, and
why it must be made an error.  Florian's initial post doesn't explain
that, and none of the followups did, although questions about whether
a warning is not already sufficient were asked.

That's a simple question, and unless answered with valid arguments,
the proposal cannot make sense to me, at least.


Re: [PATCH 2/4] libbacktrace: detect executable path on windows

2023-01-24 Thread Eli Zaretskii via Gcc
> From: Ian Lance Taylor 
> Date: Tue, 24 Jan 2023 09:58:10 -0800
> Cc: g...@hazardy.de, gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> 
> I'd rather that the patch look like the appended.  Can someone with a
> Windows system test to see what that builds and passes the tests?

ENOPATCH


Re: [PATCH 2/4] libbacktrace: detect executable path on windows

2023-01-24 Thread Eli Zaretskii via Gcc
> From: Ian Lance Taylor 
> Date: Tue, 24 Jan 2023 06:35:21 -0800
> Cc: g...@hazardy.de, gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> 
> > > On Windows it seems that MAX_PATH is not
> > > a true limit, as an extended length path may be up to 32767 bytes.
> >
> > The limit of 32767 characters (not bytes, AFAIK) is only applicable
> > when using the Unicode (a.k.a. "wide") versions of the Windows Win32
> > APIs, see
> >
> >   
> > https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation
> >
> > Since the above code uses GetModuleFileNameA, which is an "ANSI"
> > single-byte API, it is still subject to the MAX_PATH limitation, and
> > MAX_PATH is defined as 260 on Windows headers.
> 
> Thanks.  Should this code be using GetModuleFileNameW?  Or would that
> mean that the later call to open will fail?

We'd need to use _wopen or somesuch, and the file name will have to be
a wchar_t array, not a char array, yes.  So this is not very practical
when file names need to be passed between functions, unless they are
converted to UTF-8 (and back again before using them in Windows APIs).

And note that even then, the 260-byte limit could be lifted only if
the user has a new enough Windows version _and_ has opted in to the
long-name feature by turning it on in the Registry.  Otherwise, file
names used in "wide" APIs can only break the 260-byte limit if they
use the special format "\\?\D:\foo\bar", which means file names
specified by user outside of the program or file names that come from
other programs will need to be reformatted to this special format.

> 260 bytes does not seem like very much for a path name these days.

That's true.  But complications with using longer file names are still
a PITA on Windows, even though they are a step closer to practically
possible.
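
For what it's worth, a minimal sketch of the round-trip described above
(fixed MAX_PATH buffer for brevity, no "\\?\" handling, error handling
reduced to returning NULL):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Fetch the executable's file name via the "wide" API and convert it
   to UTF-8 for internal use; the caller frees the result.  Converting
   back with MultiByteToWideChar would be needed before calling _wopen. */
static char *
executable_path_utf8 (void)
{
  wchar_t wbuf[MAX_PATH];
  DWORD len = GetModuleFileNameW (NULL, wbuf, MAX_PATH);
  int need;
  char *utf8;

  if (len == 0 || len >= MAX_PATH)      /* failure or truncation */
    return NULL;

  need = WideCharToMultiByte (CP_UTF8, 0, wbuf, -1, NULL, 0, NULL, NULL);
  if (need <= 0)
    return NULL;
  utf8 = malloc (need);
  if (utf8 == NULL)
    return NULL;
  WideCharToMultiByte (CP_UTF8, 0, wbuf, -1, utf8, need, NULL, NULL);
  return utf8;
}

int
main (void)
{
  char *path = executable_path_utf8 ();
  if (path != NULL)
    printf ("%s\n", path);
  free (path);
  return 0;
}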


Re: [PATCH 2/4] libbacktrace: detect executable path on windows

2023-01-24 Thread Eli Zaretskii via Gcc
> Date: Mon, 23 Jan 2023 15:00:56 -0800
> Cc: gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> From: Ian Lance Taylor via Gcc 
> 
> > +#ifdef HAVE_WINDOWS_H
> > +
> > +static char *
> > +windows_get_executable_path (char *buf, backtrace_error_callback 
> > error_callback,
> > +void *data)
> > +{
> > +  if (GetModuleFileNameA (NULL, buf, MAX_PATH - 1) == 0)
> > +{
> > +  error_callback (data,
> > + "could not get the filename of the current 
> > executable",
> > + (int) GetLastError ());
> > +  return NULL;
> > +}
> > +  return buf;
> > +}
> 
> Thanks, but this seems incomplete.  The docs for GetModuleFileNameA
> say that if the pathname is too long to fit into the buffer it returns
> the size of the buffer and sets the error to
> ERROR_INSUFFICIENT_BUFFER.  It seems to me that in that case we should
> allocate a larger buffer and try again.

This is correct in general, but not in this particular case.

> On Windows it seems that MAX_PATH is not
> a true limit, as an extended length path may be up to 32767 bytes.

The limit of 32767 characters (not bytes, AFAIK) is only applicable
when using the Unicode (a.k.a. "wide") versions of the Windows Win32
APIs, see

  
https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation

Since the above code uses GetModuleFileNameA, which is an "ANSI"
single-byte API, it is still subject to the MAX_PATH limitation, and
MAX_PATH is defined as 260 on Windows headers.
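
To illustrate (a sketch only): with the "ANSI" call, the truncated case
shows up as a return value equal to the buffer size, so a caller limited
to MAX_PATH can at least detect it; on newer Windows versions
GetLastError() additionally reports ERROR_INSUFFICIENT_BUFFER in that
case.

#include <windows.h>
#include <stdio.h>

int
main (void)
{
  char buf[MAX_PATH];
  DWORD n = GetModuleFileNameA (NULL, buf, sizeof buf);

  if (n == 0)
    printf ("GetModuleFileNameA failed, error %lu\n",
            (unsigned long) GetLastError ());
  else if (n >= sizeof buf)     /* name did not fit in MAX_PATH bytes */
    printf ("path truncated (last error %lu)\n",
            (unsigned long) GetLastError ());
  else
    printf ("%s\n", buf);
  return 0;
}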


Re: [PATCH 3/4] libbacktrace: work with aslr on windows

2023-01-21 Thread Eli Zaretskii via Gcc
> Date: Sat, 21 Jan 2023 11:47:42 +0100
> Cc: g...@hazardy.de, gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> From: Gabriel Ravier 
> 
> 
> On 1/21/23 05:05, Eli Zaretskii wrote:
> >> Date: Fri, 20 Jan 2023 21:39:56 +0100
> >> Cc: g...@hazardy.de, gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> >> From: Gabriel Ravier 
> >>
>  - using wide APIs with Windows is generally considered to be a best
>  practice, even when not strictly needed (and in this case I can't see
>  any problem with doing so, unless maybe we want to code to work with
>  Windows 95 or something like that...)
> >>> There's no reason to forcibly break GDB on platforms where wide APIs
> >>> are not available.
> >> Are there even any platforms that have GetModuleHandleA but not
> >> GetModuleHandleW ? MSDN states that Windows XP and Windows Server 2003
> >> are the first versions to support both of the APIs, so if this is
> >> supposed to work on Windows 98, for instance, whether we're using
> >> GetModuleHandleA or GetModuleHandleW won't matter.
> > I'm not sure I follow the logic.  A program that calls
> > GetModuleHandleW will refuse to start on Windows that doesn't have
> > that API.  So any version before XP is automatically excluded the
> > moment you use code which calls that API directly (i.e. not through a
> > function pointer or somesuch).
> A program that calls GetModuleHandleA will also refuse to start on 
> Windows if it doesn't have that API. The set of Windows versions that do 
> not have GetModuleHandleA is, according to MSDN, the same as the set of 
> Windows versions that do not have GetModuleHandleW.

MSDN lies (because it wants to pretend that older versions don't
exist).  Try this much more useful site:

  http://winapi.freetechsecrets.com/win32/WIN32GetModuleHandle.htm


Re: [PATCH 3/4] libbacktrace: work with aslr on windows

2023-01-21 Thread Eli Zaretskii via Gcc
> Date: Sat, 21 Jan 2023 17:18:14 +0800
> Cc: g...@hazardy.de, gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> From: LIU Hao 
> 
> On 2023-01-21 12:05, Eli Zaretskii via Gcc wrote:
> > I'm not sure I follow the logic.  A program that calls
> > GetModuleHandleW will refuse to start on Windows that doesn't have
> > that API.  So any version before XP is automatically excluded the
> > moment you use code which calls that API directly (i.e. not through a
> > function pointer or somesuch).
> 
> Are _you_ still willing to maintain backward compatibility with Windows 9x? 
> Even mingw-w64 has been 
> defaulting to Windows Server 2003 since 2007. Why would anyone build a modern 
> compiler for such old 
> operating systems?

I'm only saying that we should not deliberately break those old
platforms unless we have a good reason.  And I see no such good reason
in this case: GetModuleHandleA will do the job exactly like
GetModuleHandleW will.

> With any Windows that is modern enough, wide APIs should always be preferred 
> to ANSI ones, 
> especially when the argument is constant. Almost all ANSI APIs (the only 
> exception I know of is 
> `OutputDebugStringA` which does the inverse) translate their ANSI string 
> arguments to wide strings 
> and delegate to wide ones, so by calling wide APIs explicitly, such overhead 
> can be avoided.

The overhead is only relevant in code that is run in performance
critical places.  I don't think this is such a place.


Re: [PATCH 3/4] libbacktrace: work with aslr on windows

2023-01-20 Thread Eli Zaretskii via Gcc
> Date: Fri, 20 Jan 2023 21:39:56 +0100
> Cc: g...@hazardy.de, gcc-patc...@gcc.gnu.org, gcc@gcc.gnu.org
> From: Gabriel Ravier 
> 
> >> - using wide APIs with Windows is generally considered to be a best
> >> practice, even when not strictly needed (and in this case I can't see
> >> any problem with doing so, unless maybe we want to code to work with
> >> Windows 95 or something like that...)
> > There's no reason to forcibly break GDB on platforms where wide APIs
> > are not available.
> Are there even any platforms that have GetModuleHandleA but not 
> GetModuleHandleW ? MSDN states that Windows XP and Windows Server 2003 
> are the first versions to support both of the APIs, so if this is 
> supposed to work on Windows 98, for instance, whether we're using 
> GetModuleHandleA or GetModuleHandleW won't matter.

I'm not sure I follow the logic.  A program that calls
GetModuleHandleW will refuse to start on Windows that doesn't have
that API.  So any version before XP is automatically excluded the
moment you use code which calls that API directly (i.e. not through a
function pointer or somesuch).
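
For completeness, the function-pointer route alluded to above would look
roughly like this sketch; the executable then still loads on systems
where the wide API is absent:

#include <windows.h>
#include <stdio.h>

typedef HMODULE (WINAPI *get_module_handle_w_fn) (LPCWSTR);

int
main (void)
{
  /* Resolve GetModuleHandleW at run time instead of importing it, so
     the loader never sees an unsatisfied reference to it. */
  HMODULE kernel32 = GetModuleHandleA ("kernel32.dll");
  get_module_handle_w_fn pGetModuleHandleW
    = (get_module_handle_w_fn) GetProcAddress (kernel32, "GetModuleHandleW");

  HMODULE self = pGetModuleHandleW != NULL
                 ? pGetModuleHandleW (NULL)
                 : GetModuleHandleA (NULL);
  printf ("module handle: %p\n", (void *) self);
  return 0;
}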


Re: [PATCH 3/4] libbacktrace: work with aslr on windows

2023-01-20 Thread Eli Zaretskii via Gcc-patches
> Date: Fri, 20 Jan 2023 17:46:59 +0100
> Cc: gcc-patches@gcc.gnu.org, g...@gcc.gnu.org
> From: Gabriel Ravier 
> 
> On 1/20/23 14:39, Eli Zaretskii via Gcc wrote:
> >> From: Björn Schäpers 
> >> Date: Fri, 20 Jan 2023 11:54:08 +0100
> >>
> >> @@ -856,7 +870,12 @@ coff_add (struct backtrace_state *state, int 
> >> descriptor,
> >>  + (sections[i].offset - min_offset));
> >>   }
> >>   
> >> -  if (!backtrace_dwarf_add (state, /* base_address */ 0, &dwarf_sections,
> >> +#ifdef HAVE_WINDOWS_H
> >> +module_handle = (uintptr_t) GetModuleHandleW (NULL);
> >> +base_address = module_handle - image_base;
> >> +#endif
> >> +
> >> +  if (!backtrace_dwarf_add (state, base_address, &dwarf_sections,
> >>0, /* FIXME: is_bigendian */
> >>NULL, /* altlink */
> >>error_callback, data, fileline_fn,
> > Why do you force using the "wide" APIs here?  Won't GetModuleHandle do
> > the job, whether it resolves to GetModuleHandleA or GetModuleHandleW?
> 
> I would expect the reason to be either that:
> 
> - using wide APIs with Windows is generally considered to be a best 
> practice, even when not strictly needed (and in this case I can't see 
> any problem with doing so, unless maybe we want to code to work with 
> Windows 95 or something like that...)

There's no reason to forcibly break GDB on platforms where wide APIs
are not available.

> - using the narrow API somehow has an actual drawback, for example maybe 
> it might not work if the name of the exe file the NULL will tell it to 
> get a handle to contains wide characters

Native Windows port of GDB doesn't support Unicode file names anyway,
which is why you used the *A APIs elsewhere in the patch, and
rightfully so.  So there's no reason to use "wide" APIs in this one
place, and every reason not to.
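
As an aside, the base-address computation in the quoted hunk boils down
to the following sketch; image_base here is a placeholder for the
ImageBase value libbacktrace reads from the PE header, and either
flavour of GetModuleHandle yields the same load address:

#include <windows.h>
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  /* The HMODULE of a module is in practice its load address;
     subtracting the link-time ImageBase gives the ASLR slide to add
     to addresses taken from the debug information. */
  uintptr_t module_handle = (uintptr_t) GetModuleHandle (NULL);
  uintptr_t image_base = 0x400000;   /* placeholder; really parsed from the PE header */
  uintptr_t base_address = module_handle - image_base;

  printf ("load address %#llx, assumed ImageBase %#llx, slide %#llx\n",
          (unsigned long long) module_handle,
          (unsigned long long) image_base,
          (unsigned long long) base_address);
  return 0;
}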


Re: [PATCH 3/4] libbacktrace: work with aslr on windows

2023-01-20 Thread Eli Zaretskii via Gcc-patches
> From: Björn Schäpers 
> Date: Fri, 20 Jan 2023 11:54:08 +0100
> 
> @@ -856,7 +870,12 @@ coff_add (struct backtrace_state *state, int descriptor,
> + (sections[i].offset - min_offset));
>  }
>  
> -  if (!backtrace_dwarf_add (state, /* base_address */ 0, &dwarf_sections,
> +#ifdef HAVE_WINDOWS_H
> +module_handle = (uintptr_t) GetModuleHandleW (NULL);
> +base_address = module_handle - image_base;
> +#endif
> +
> +  if (!backtrace_dwarf_add (state, base_address, &dwarf_sections,
>   0, /* FIXME: is_bigendian */
>   NULL, /* altlink */
>   error_callback, data, fileline_fn,

Why do you force using the "wide" APIs here?  Won't GetModuleHandle do
the job, whether it resolves to GetModuleHandleA or GetModuleHandleW?
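
For context, the generic name is just a macro that <windows.h> maps to
one of the two spellings, so a sketch like the following compiles either
way:

#include <windows.h>
#include <stdio.h>

int
main (void)
{
  /* GetModuleHandle expands to GetModuleHandleA or GetModuleHandleW
     depending on whether UNICODE is defined when <windows.h> is read;
     with a NULL argument both return the executable's own HMODULE. */
  HMODULE self = GetModuleHandle (NULL);
#ifdef UNICODE
  printf ("resolved to GetModuleHandleW: %p\n", (void *) self);
#else
  printf ("resolved to GetModuleHandleA: %p\n", (void *) self);
#endif
  return 0;
}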


Re: gcc parameter -mcrtdll= for choosing Windows C RunTime DLL library

2022-11-20 Thread Eli Zaretskii via Gcc
> Date: Sun, 20 Nov 2022 16:44:08 +0100
> From: Pali Rohár 
> Cc: gcc@gcc.gnu.org, mingw-w64-pub...@lists.sourceforge.net
> 
> > Installing a redistributable is a nuisance, and dependence on non-system
> > libraries might make the program non-free.
> 
> On new windows versions they may be preinstalled (depends on newness of
> windows version).

I'm talking about older ones.  It is customary nowadays to build on Windows
11 and then run on Windows 8.

> And if your application uses features unavailable in
> older (or default) crt versions then this makes the application code
> simpler. Also, redistributable packages are in most cases installed by
> the Windows update mechanism, which could be marked as a system library. But
> well, this is more a license discussion than a development discussion...

I mentioned that because people might inadvertently build GPL'ed GNU
software using this option, and violate the GPL without knowing it.  This is
relevant to those who read this list and port GNU software to MS-Windows.

> > > Note that with this option, you can also choose an older version than the
> > > default one (WinXP msvcrt.dll). So e.g. you can choose msvcrt20.dll or
> > > crtdll.dll for older Windows versions.
> > 
> > Using the OS default MSVCRT already gets me that, at zero cost.
> 
> Here "OS default MSVCRT" means Windows XP MSVCRT.DLL.
> 
> On older Windows versions there is no pre-installed MSVCRT.DLL. There
> is MSVCRT20.DLL or CRTDLL.DLL (depending on how old the Windows version is). So
> it is not at zero cost: you have to either deal with that nuisance and install
> MSVCRT.DLL as you write above, or switch to an older CRT version which is
> preinstalled in the OS.

I never saw any problems with programs linked against MSVCRT.DLL, on all
versions of Windows from XP up to Windows 10.  None.


Re: gcc parameter -mcrtdll= for choosing Windows C RunTime DLL library

2022-11-20 Thread Eli Zaretskii via Gcc
> Date: Sun, 20 Nov 2022 16:04:11 +0100
> From: Pali Rohár 
> Cc: gcc@gcc.gnu.org, mingw-w64-pub...@lists.sourceforge.net
> 
> On Sunday 20 November 2022 16:45:55 Eli Zaretskii wrote:
> > > Date: Sun, 20 Nov 2022 13:53:48 +0100
> > > From: Pali Rohár via Gcc 
> > > 
> > Linking a program against a specific runtime means the produced binary will
> > not run on Windows systems older than the one where it was linked.  Why is
> > such a limitation a good idea, may I ask?
> 
> It will also run on an older Windows system if you install the redistributable
> runtime library, which in most cases is already installed because other
> programs use it.

Installing a redistributable is a nuisance, and dependence on non-system
libraries might make the program non-free.

> And why would you want a new version? Because of the better C99/C11 support
> in ucrtbase.dll.

That comes with a price, though.

> Note that with this option, you can also choose an older version than the
> default one (WinXP msvcrt.dll). So e.g. you can choose msvcrt20.dll or
> crtdll.dll for older Windows versions.

Using the OS default MSVCRT already gets me that, at zero cost.


Re: gcc parameter -mcrtdll= for choosing Windows C RunTime DLL library

2022-11-20 Thread Eli Zaretskii via Gcc
> Date: Sun, 20 Nov 2022 13:53:48 +0100
> From: Pali Rohár via Gcc 
> 
> Hello! I would like to propose a new parameter for gcc: -mcrtdll=, to
> allow specifying which Windows C Runtime library the binary should be
> linked against. On Windows there are several CRT libraries, and currently gcc
> links to libmsvcrt.a, which is in most cases a symlink to libmsvcrt-os.a
> (but this can be changed, e.g. when building mingw-w64). The mingw-w64 project
> already builds an import .a library for every CRT DLL (from the old
> crtdll.dll up to the new ucrtbase.dll), so it is ready for use. A simple
> patch for gcc which implements the -mcrtdll parameter is below. Note that on
> the internet there are other very similar patches for a -mcrtdll= parameter, and
> some are part of custom mingw32 / mingw-w64 gcc builds. What do you
> think? Could gcc have "official" support for a -mcrtdll= parameter?

Linking a program against a specific runtime means the produced binary will
not run on Windows systems older than the one where it was linked.  Why is
such a limitation a good idea, may I ask?
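
For what it's worth, the intended usage would presumably look something
like this (hypothetical invocation -- the exact value syntax depends on
the patch; the DLL's base name is assumed here, matching the import
library names mentioned above):

  i686-w64-mingw32-gcc -mcrtdll=ucrtbase -o hello.exe hello.c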


Re: Compilation of rust-demangle.c fails on MinGW

2021-08-04 Thread Eli Zaretskii via Gcc-bugs
> From: Andrew Pinski 
> Date: Wed, 4 Aug 2021 11:57:47 -0700
> Cc: Jonathan Wakely , GCC Bugs , 
>   Andreas Schwab 
> 
> > > https://gcc.gnu.org/bugs/ instead.
> >
> > Which points to GCC Bugzilla, which doesn't have a "libiberty"
> > component.  So I suggest to add such a component on the Bugzilla.
> 
> It has a demangler component though.

Thanks, submitted the bug report for the demangler component.


Re: Compilation of rust-demangle.c fails on MinGW

2021-08-04 Thread Eli Zaretskii via Gcc-bugs
> Date: Wed, 4 Aug 2021 15:41:30 +0100
> From: Jonathan Wakely 
> Cc: gcc-bugs@gcc.gnu.org, e...@gnu.org
> 
> > The libiberty README says to report bugs to gcc-bugs@gcc.gnu.org.
> 
> Well that needs to be fixed. It should point to
> https://gcc.gnu.org/bugs/ instead.

Which points to GCC Bugzilla, which doesn't have a "libiberty"
component.  So I suggest to add such a component on the Bugzilla.


Re: Compilation of rust-demangle.c fails on MinGW

2021-08-04 Thread Eli Zaretskii via Gcc-bugs
> From: Andreas Schwab 
> Cc: Richard Sandiford ,  Eli Zaretskii
>  
> Date: Wed, 04 Aug 2021 15:35:04 +0200
> 
> On Aug 04 2021, Eli Zaretskii via Gcc-bugs wrote:
> 
> > I'd love to, but please tell me where.  I couldn't find any
> > information about reporting libiberty bugs, sorry if I missed
> > something obvious.
> 
> The libiberty README says to report bugs to gcc-bugs@gcc.gnu.org.

Yes, and that's what I did eventually.  But I was then told by Richard
Sandiford that that address is "mostly a bugzilla feed and so isn't widely
read".


Re: Compilation of rust-demangle.c fails on MinGW

2021-08-04 Thread Eli Zaretskii via Gcc-bugs
> Date: Wed, 4 Aug 2021 14:03:21 +0100
> From: Jonathan Wakely 
> Cc: gcc-bugs@gcc.gnu.org
> 
> In GCC's bugzilla.

That's what I tried originally, but there's no libiberty there among
the various "components".  So I decided the GCC Bugzilla was not the
right place.  If it is the right place, please tell which component to
select.  Maybe "other"?  (But it's confusing that libiberty is not in
the list.)

Thanks.


Re: Compilation of rust-demangle.c fails on MinGW

2021-08-04 Thread Eli Zaretskii via Gcc-bugs
> From: Richard Sandiford 
> Cc: Eli Zaretskii 
> Date: Wed, 04 Aug 2021 13:04:24 +0100
> 
> Eli Zaretskii via Gcc-bugs  writes:
> > The version of rust-demangle.c included with Binutils 2.37 doesn't
> > compile with MinGW:
> >
> >  mingw32-gcc -c -DHAVE_CONFIG_H -O2 -gdwarf-4 -g3  -I. 
> > -I../../binutils-2.37/libiberty/../include   -W -Wall -Wwrite-strings 
> > -Wc++-compat -Wstrict-prototypes -Wshadow=local -pedantic  -D_GNU_SOURCE   
> > ../../binutils-2.37/libiberty/rust-demangle.c -o rust-demangle.o
> >  ../../binutils-2.37/libiberty/rust-demangle.c:84:3: error: unknown 
> > type name 'uint'
> > 84 |   uint recursion;
> >|   ^~~~
> >  ../../binutils-2.37/libiberty/rust-demangle.c: In function 
> > 'demangle_path':
> >  ../../binutils-2.37/libiberty/rust-demangle.c:87:37: error: 'uint' 
> > undeclared (first use in this function); did you mean 'int'?
> > 87 | #define RUST_NO_RECURSION_LIMIT   ((uint) -1)
> >| ^~~~
> >  ../../binutils-2.37/libiberty/rust-demangle.c:686:25: note: in 
> > expansion of macro 'RUST_NO_RECURSION_LIMIT'
> >686 |   if (rdm->recursion != RUST_NO_RECURSION_LIMIT)
> >| ^~~
> >  ../../binutils-2.37/libiberty/rust-demangle.c:87:37: note: each 
> > undeclared identifier is reported only once for each function it appears in
> > 87 | #define RUST_NO_RECURSION_LIMIT   ((uint) -1)
> >| ^~~~
> >  ../../binutils-2.37/libiberty/rust-demangle.c:686:25: note: in 
> > expansion of macro 'RUST_NO_RECURSION_LIMIT'
> >686 |   if (rdm->recursion != RUST_NO_RECURSION_LIMIT)
> >| ^~~
> >  ../../binutils-2.37/libiberty/rust-demangle.c: In function 
> > 'rust_demangle_callback':
> >  ../../binutils-2.37/libiberty/rust-demangle.c:87:37: error: 'uint' 
> > undeclared (first use in this function); did you mean 'int'?
> > 87 | #define RUST_NO_RECURSION_LIMIT   ((uint) -1)
> >| ^~~~
> >  ../../binutils-2.37/libiberty/rust-demangle.c:1347:55: note: in 
> > expansion of macro 'RUST_NO_RECURSION_LIMIT'
> >   1347 |   rdm.recursion = (options & DMGL_NO_RECURSE_LIMIT) ? 
> > RUST_NO_RECURSION_LIMIT : 0;
> >|   
> > ^~~
> >
> > This is because the data type 'uint' is not defined in the MinGW
> > headers.  I used uint32_t instead, and it compiled OK.
> 
> This list is mostly just a bugzilla feed and so isn't widely read.
> Could you file a PR?

I'd love to, but please tell me where.  I couldn't find any
information about reporting libiberty bugs, sorry if I missed
something obvious.


Compilation of rust-demangle.c fails on MinGW

2021-07-31 Thread Eli Zaretskii via Gcc-bugs
The version of rust-demangle.c included with Binutils 2.37 doesn't
compile with MinGW:

 mingw32-gcc -c -DHAVE_CONFIG_H -O2 -gdwarf-4 -g3  -I. 
-I../../binutils-2.37/libiberty/../include   -W -Wall -Wwrite-strings 
-Wc++-compat -Wstrict-prototypes -Wshadow=local -pedantic  -D_GNU_SOURCE   
../../binutils-2.37/libiberty/rust-demangle.c -o rust-demangle.o
 ../../binutils-2.37/libiberty/rust-demangle.c:84:3: error: unknown type 
name 'uint'
84 |   uint recursion;
   |   ^~~~
 ../../binutils-2.37/libiberty/rust-demangle.c: In function 'demangle_path':
 ../../binutils-2.37/libiberty/rust-demangle.c:87:37: error: 'uint' 
undeclared (first use in this function); did you mean 'int'?
87 | #define RUST_NO_RECURSION_LIMIT   ((uint) -1)
   | ^~~~
 ../../binutils-2.37/libiberty/rust-demangle.c:686:25: note: in expansion 
of macro 'RUST_NO_RECURSION_LIMIT'
   686 |   if (rdm->recursion != RUST_NO_RECURSION_LIMIT)
   | ^~~
 ../../binutils-2.37/libiberty/rust-demangle.c:87:37: note: each undeclared 
identifier is reported only once for each function it appears in
87 | #define RUST_NO_RECURSION_LIMIT   ((uint) -1)
   | ^~~~
 ../../binutils-2.37/libiberty/rust-demangle.c:686:25: note: in expansion 
of macro 'RUST_NO_RECURSION_LIMIT'
   686 |   if (rdm->recursion != RUST_NO_RECURSION_LIMIT)
   | ^~~
 ../../binutils-2.37/libiberty/rust-demangle.c: In function 
'rust_demangle_callback':
 ../../binutils-2.37/libiberty/rust-demangle.c:87:37: error: 'uint' 
undeclared (first use in this function); did you mean 'int'?
87 | #define RUST_NO_RECURSION_LIMIT   ((uint) -1)
   | ^~~~
 ../../binutils-2.37/libiberty/rust-demangle.c:1347:55: note: in expansion 
of macro 'RUST_NO_RECURSION_LIMIT'
  1347 |   rdm.recursion = (options & DMGL_NO_RECURSE_LIMIT) ? 
RUST_NO_RECURSION_LIMIT : 0;
   |   
^~~

This is because the data type 'uint' is not defined in the MinGW
headers.  I used uint32_t instead, and it compiled OK.
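
For reference, a minimal sketch of the kind of local change described
above (not the actual patch that went upstream; the surrounding struct
is abbreviated here):

  #include <stdint.h>   /* for uint32_t */

  struct rust_demangler
  {
    /* ... other members unchanged ... */
    uint32_t recursion;                         /* was: uint recursion; */
  };

  /* Sentinel value meaning "no recursion limit requested".  */
  #define RUST_NO_RECURSION_LIMIT   ((uint32_t) -1)   /* was: ((uint) -1) */

(Plain 'unsigned int' would presumably work just as well, since that is
what the non-standard 'uint' typedef usually stands for.)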


Re: Benefits of using Sphinx documentation format

2021-07-13 Thread Eli Zaretskii via Gcc
> From: Richard Biener 
> Date: Tue, 13 Jul 2021 14:46:33 +0200
> Cc: Jonathan Wakely , GCC Development 
> I can very well understand the use of the html manual when you want
> to share pointers to specific parts of the documentation in communications.

In the Emacs community, we have a notation for a pointer to an Info
node FOO in the manual BAR that Emacs understands -- you just hit a
key, and Emacs lands you there.  So if you use the Emacs Info reader,
this issue doesn't exist.

> I'm also not sure if there's some texinfo URI that could be used to
> share documentation pointers.

Another Emacs command automatically produces a pointer to the current
node in an Info manual in the above format.  So if I want to share a
pointer with you, I invoke that command, paste the result into an
email message, and you on your end invoke that other command.  Problem
solved.
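
(Concretely, if memory serves: in the Emacs Info reader, 'c' runs
Info-copy-current-node-name, which puts a pointer of the form

  (gcc) Optimize Options

into the kill ring, and the recipient jumps to it with
'g (gcc) Optimize Options RET', i.e. Info-goto-node.  The exact key
bindings here are from memory, so treat them as an assumption.)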


Re: Benefits of using Sphinx documentation format

2021-07-13 Thread Eli Zaretskii via Gcc
> From: Richard Biener 
> Date: Tue, 13 Jul 2021 08:24:17 +0200
> Cc: Eli Zaretskii , "gcc@gcc.gnu.org" 
> 
> I actually like texinfo (well, because I know it somewhat, compare to sphinx).
> I think it produces quite decent PDF manuals.  I never use the html
> output (in fact I read our manual using grep & vim in the original
> .texi form ...).

FTR, I almost exclusively use the (Emacs) Info reader to read the
manuals in Info format.  I never understood those who prefer reading
HTML-formatted docs in a Web browser.  The advanced features of Info:
the index-search with powerful completion built-in, seamless
cross-references between manuals, the ability to search all of the
manuals installed on my system and then browse the results, the
ability to have Emacs land me at the documentation of the symbol under
the cursor regardless of its language/package/library, no dependency
on connectivity, to mention just a few -- all those are tremendous
productivity boosters.  I rarely spend more than a few seconds to find
the piece of documentation I need (not including reading it, of
course).  (And yes, grep-style regexp search through the entire manual
is also available, although I only need to use it in rare and
exceptional circumstances.)

So I never understood people, let alone developers, who are willing to
throw such power out the window and use HTML.  I only do that when
there's a manual I don't have installed in the Info format (a rare
phenomenon) or some other similarly exceptional cases.  But I get it
that there are strange people who prefer HTML nonetheless.  More
importantly, the Texinfo developers understand that, and actively work
towards making the Texinfo HTML better, with some impressive progress
already there, see the latest release 6.8 of Texinfo (Gavin mentioned
some of the advances).


Re: Benefits of using Sphinx documentation format

2021-07-12 Thread Eli Zaretskii via Gcc
> From: Jonathan Wakely 
> Date: Mon, 12 Jul 2021 18:15:26 +0100
> Cc: Matthias Kretz , "gcc@gcc.gnu.org" 
> 
> > Gavin Smith, the GNU Texinfo maintainer, responded in detail to that
> > list.  However, his message didn't get through to the list, for some
> > reason.
> 
> It did:
> https://gcc.gnu.org/pipermail/gcc/2021-July/236744.html
> https://gcc.gnu.org/pipermail/gcc-patches/2021-July/574987.html

That's not the message I was talking about.  Gavin sent another, which
didn't get posted.


Re: Benefits of using Sphinx documentation format

2021-07-12 Thread Eli Zaretskii via Gcc-patches
> From: Jonathan Wakely 
> Date: Mon, 12 Jul 2021 15:54:49 +0100
> Cc: Martin Liška , 
>   "g...@gcc.gnu.org" , gcc-patches 
> , 
>   "Joseph S. Myers" 
> 
> You like texinfo. We get it.

Would you please drop the attitude?


Re: Benefits of using Sphinx documentation format

2021-07-12 Thread Eli Zaretskii via Gcc
> Cc: h...@bitrange.com, gcc@gcc.gnu.org, gcc-patc...@gcc.gnu.org,
>  jos...@codesourcery.com
> From: Martin Liška 
> Date: Mon, 12 Jul 2021 16:37:00 +0200
> 
> >   4) The need to learn yet another markup language.
> >  While this is not a problem for simple text, it does require a
> >  serious study of RST and Sphinx to use the more advanced features.
> 
> No, majority of the documentation is pretty simple: basic formatting, links, 
> tables and
> code examples.

We also have documentation of APIs (a.k.a. "functions").  I actually
tried to find in the Sphinx docs how to do that and got lost.  So, not
really "very simple".


Re: Benefits of using Sphinx documentation format

2021-07-12 Thread Eli Zaretskii via Gcc-patches
> Cc: g...@gcc.gnu.org, gcc-patches@gcc.gnu.org, jos...@codesourcery.com
> From: Martin Liška 
> Date: Mon, 12 Jul 2021 16:34:11 +0200
> 
> > "Texinfo must go" is one possible conclusion from your description.
> > But it isn't the only one.  An alternative is "the Texinfo source of
> > the GCC manual must be improved to fix this problem."  And yes, this
> > problem does have a solution in Texinfo.
> 
> No, the alternative is more powerful output given by Texinfo, in particular
> more modern HTML pages.

Please see the response by Gavin: it sounds like at least some of that
was resolved in Texinfo, sometimes long ago.


Re: Benefits of using Sphinx documentation format

2021-07-12 Thread Eli Zaretskii via Gcc
> From: Matthias Kretz 
> Date: Mon, 12 Jul 2021 16:54:50 +0200
> 
> On Monday, 12 July 2021 16:30:23 CEST Martin Liška wrote:
> > On 7/12/21 4:12 PM, Eli Zaretskii wrote:
> > > I get it that you dislike the HTML produced by Texinfo, but without
> > > some examples of such bad HTML it is impossible to know what exactly
> > > do you dislike and why.
> 
> I believe Martin made a really good list.

Gavin Smith, the GNU Texinfo maintainer, responded in detail to that
list.  However, his message didn't get through to the list, for some
reason.  Can someone please see why, and release his message?  I think
he makes some important points, and his message does deserve being
posted and read as part of this discussion.


Re: Benefits of using Sphinx documentation format

2021-07-12 Thread Eli Zaretskii via Gcc
> From: Jonathan Wakely 
> Date: Mon, 12 Jul 2021 15:05:11 +0100
> Cc: Martin Liška , 
>   "gcc@gcc.gnu.org" , gcc-patches 
> , 
>   "Joseph S. Myers" 
> 
> To be clear, I give links to users frequently (several times a week,
> every week, for decades) and prefer to give them a link to specific
> options. Obviously I link to the online HTML docs rather than telling
> them an 'info' command to run, because most people don't use info
> pages or know how to navigate them. That means I can't provide decent
> links, because the actual option name I'm trying to link to is always
> off the top of the page. This is simply unacceptable IMHO. Texinfo
> must go.

"Texinfo must go" is one possible conclusion from your description.
But it isn't the only one.  An alternative is "the Texinfo source of
the GCC manual must be improved to fix this problem."  And yes, this
problem does have a solution in Texinfo.


Re: Benefits of using Sphinx documentation format

2021-07-12 Thread Eli Zaretskii via Gcc
> From: Jonathan Wakely 
> Date: Mon, 12 Jul 2021 14:53:44 +0100
> Cc: Martin Liška , 
>   "gcc@gcc.gnu.org" , gcc-patches 
> , 
>   "Joseph S. Myers" 
> 
> For me, these items are enough justification to switch away from
> texinfo, which produces crap HTML pages with crap anchors.

If we want to have a serious discussion with useful conclusions, I
suggest to avoid "loaded" terminology.

I get it that you dislike the HTML produced by Texinfo, but without
some examples of such bad HTML it is impossible to know what exactly
do you dislike and why.

> You can't find out the anchors without inspecting (and searching)
> the HTML source. That's utterly stupid.

I don't think I follow: find out the anchors by which means, and for
what purposes?

> And even after you do that, the anchor
> is at the wrong place:
> https://gcc.gnu.org/onlinedocs/gcc/Overall-Options.html#index-c

IME, the anchor is where you put it.  If you show me the source of
that HTML, maybe we can have a more useful discussion of the issue.

> As somebody who spends a lot of time helping users on the mailing
> list, IRC, stackoverflow, and elsewhere, this "feature" of the texinfo
> HTML has angered me for many years.

As somebody who spends a lot of time helping users on every possible
forum, and as someone who has written a lot of Texinfo, I don't
understand what angers you.  Please elaborate.

> Yes, some people like texinfo, but some people also dislike it and
> there are serious usability problems with the output. I support
> replacing texinfo with anything that isn't texinfo.

"Anything"?  Even plain text?  I hope not.

See, such "arguments" don't help to have a useful discussion.

> >  4) The need to learn yet another markup language.
> > While this is not a problem for simple text, it does require a
> > serious study of RST and Sphinx to use the more advanced features.
> 
> This is a problem with texinfo too.

Not for someone who already knows Texinfo.  We are talking about
switching away from it, so I'm thinking about people who contributed
patches for the manual in the past.  They already know Texinfo, at
least to some extent, and some of them know it very well.

> >  5) Lack of macros.
> > AFAIK, only simple textual substitution is available, no macros
> > with arguments.
> 
> Is this a problem for GCC docs though?

I don't know.  It could be, even if it isn't now.


Re: Benefits of using Sphinx documentation format

2021-07-12 Thread Eli Zaretskii via Gcc-patches
> Cc: g...@gcc.gnu.org, gcc-patches@gcc.gnu.org, jos...@codesourcery.com
> From: Martin Liška 
> Date: Mon, 12 Jul 2021 15:25:47 +0200
> 
> Let's make it a separate sub-thread where we can discuss motivation why
> do I want moving to Sphinx format.

Thanks for starting this discussion.

> Benefits:
> 1) modern looking HTML output (before: [1], after: [2]):
> a) syntax highlighting for examples (code, shell commands, etc.)
> b) precise anchors, the current Texinfo anchors are not displayed (start 
> with first line of an option)
> c) one can easily copy a link to an anchor (displayed as ¶)
> d) internal links are working, e.g. one can easily jump from listing of 
> options
> e) left menu navigation provides better orientation in the manual
> f) Sphinx provides internal search capability: [3]
> 2) internal links are also provided in PDF version of the manual

How is this different from Texinfo?

> 3) some existing GCC manuals are already written in Sphinx (GNAT manuals and 
> libgccjit)
> 4) support for various output formats, some people are interested in ePUB 
> format

Texinfo likewise supports many output formats.  Someone presented a
very simple package to produce epub format from it.

> 5) Sphinx is using RST which is quite minimal semantic markup language

Is it more minimal than Texinfo?

> 6) TOC is automatically generated - no need for manual navigation like seen 
> here: [5]

That has not been needed in Texinfo either for a long time now.  Nowadays, you
just say

  @node Whatever

and the rest is done automatically, as long as the manual's structure
is a proper tree (which it normally is, I know of only one manual that
is an exception).

> Disadvantages:
> 
> 1) info pages are currently missing Page description in TOC
> 2) rich formatting is leading to extra wrapping in info output - being 
> partially addressed in [4]
> 3) one needs e.g. Emacs support for inline links (rendered as notes)

 4) The need to learn yet another markup language.
While this is not a problem for simple text, it does require a
serious study of RST and Sphinx to use the more advanced features.

 5) Lack of macros.
AFAIK, only simple textual substitution is available, no macros
with arguments.


Re: [PATCH] Port GCC documentation to Sphinx

2021-07-05 Thread Eli Zaretskii via Gcc
> From: Richard Sandiford 
> Cc: Eli Zaretskii ,  gcc@gcc.gnu.org,  gcc-patc...@gcc.gnu.org, 
>  jos...@codesourcery.com
> Date: Mon, 05 Jul 2021 10:17:38 +0100
> 
> Hans-Peter Nilsson  writes:
> > I've read the discussion downthread, but I seem to miss (a recap
> > of) the benefits of moving to Sphinx.  Maybe other have too and
> > it'd be a good idea to repeat them?  Otherwise, the impression
> > is not so good, as all I see is bits here and there getting lost
> > in translation.
> 
> Better cross-referencing is one big feature.

See below: the Info format has some features in addition to
cross-references that can make this a much smaller issue.  HTML has
just the cross-references, so "when you are a hammer, every problem
looks like a nail".

> IMO this subthread has demonstrated why the limitations of info
> formatting have held back the amount of cross-referencing in the
> online html.

I disagree with this conclusion, see below.

> (And based on empirical evidence, I get the impression that far more
> people use the online html docs than the info docs.)

HTML browsers currently lack some features that make Info the format
of choice for me when I need to use the documentation efficiently.
The most important feature I miss in HTML browsers is the index
search.  A good manual usually has extensive index (or indices) which
make it very easy to find a specific topic one is looking for,
i.e. use the manual as a reference (as opposed as a first-time
reading, when you read large portions of the manual in sequence).

Another important feature is regexp search across multiple sections
(with HTML you'd be forced to download the manual as a single large
file for that, and then you'll probably miss regexps).

Yet another feature which, when needed, is something to kill for, is
the "info apropos" command, which can search all the manuals on your
system and build a menu from the matching sections found in different
manuals.  And there are a few more.
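
(For completeness, and from memory, so treat the exact spelling as an
assumption: with the standalone reader this is

  info --apropos=SUBJECT

from the shell prompt, and 'M-x info-apropos' inside Emacs.)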

(Texinfo folks are working on JavaScript code to add some missing
capabilities to Web browsers, but that effort is not yet complete.)

> E.g. quoting from Richard's recent patch:
> 
>   @item -fmove-loop-stores
>   @opindex fmove-loop-stores
>   Enables the loop store motion pass in the GIMPLE loop optimizer.  This
>   moves invariant stores to after the end of the loop in exchange for
>   carrying the stored value in a register across the iteration.
>   Note for this option to have an effect @option{-ftree-loop-im} has to 
>   be enabled as well.  Enabled at level @option{-O1} and higher, except 
>   for @option{-Og}.
> 
> In the online docs, this will just be plain text.  Anyone who doesn't
> know what -ftree-loop-im is will have to search for it manually.

First, even if there are no cross-references, manual search is not the
best way.  It is much easier to use index-search:

  i ftree TAB

will display a list of options that you could be after, and you can
simply choose from the list, or type a bit more until you have a
single match.

Moreover, adding cross-references is easy:

  @item -fmove-loop-stores
  @opindex fmove-loop-stores
  Enables the loop store motion pass in the GIMPLE loop optimizer.  This
  moves invariant stores to after the end of the loop in exchange for
  carrying the stored value in a register across the iteration.
  Note for this option to have an effect @option{-ftree-loop-im}
  (@pxref{Optimize Options, -ftree-loop-im}) 
  ^^
  has to be enabled as well.  Enabled at level @option{-O1} and higher,
  except for @option{-Og}.

If this looks like too much work, a simple Texinfo macro (two, if you
want an anchor where you point) will do.
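
An untested sketch of what such a pair of macros might look like
(hypothetical names, just to illustrate the idea):

  @c Place an anchor where an option is documented:
  @macro optanchor{opt}
  @anchor{opt-\opt\}
  @end macro

  @c Mention an option, with a cross-reference to its documentation:
  @macro optref{opt}
  @option{-\opt\} (@pxref{opt-\opt\,, -\opt\})
  @end macro

With those, the sentence above reduces to "... @optref{ftree-loop-im}
has to be enabled as well", and "@optanchor{ftree-loop-im}" goes next
to the @item that documents the option.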

> Adding the extra references to the html (and pdf) output but dropping
> them from the info sounds like a good compromise.

But that's not what happens.  And besides, how would you decide which
cross-references to drop and which to retain in Info?


Re: [PATCH] Port GCC documentation to Sphinx

2021-07-02 Thread Eli Zaretskii via Gcc-patches
> Cc: Eli Zaretskii , g...@gcc.gnu.org, gcc-patches@gcc.gnu.org,
>  jos...@codesourcery.com
> From: Martin Liška 
> Date: Fri, 2 Jul 2021 11:40:06 +0200
> 
> > It must
> > look sensible without that.  In this case it seems that already the
> > generated .texinfo input to makeinfo is bad, where does the 'e' (or 'f')
> > come from?  The original texinfo file simply contains:
> 
> These are auto-numbered. Theoretically one can use the verbose anchor names:
> 
> @anchor{demo cmdoption-Wshift-overflow3}@anchor{e}@anchor{demo 
> cmdoption-wshift-overflow3}@anchor{f}
> @deffn {Option} @w{-}Wshift@w{-}overflow3=n, @w{-}Wshift@w{-}overflow3
> 
> Default option value for @ref{e,,-Wshift-overflow3}.
> 
> But these would lead to even longer '*note -Wshift-overflow3: demo 
> cmdoption-wshift-overflow3' output.

While auto-numbering is a nice feature, the human-readable anchors
have an advantage of hinting on the topic to which the cross-reference
points.


Re: [PATCH] Port GCC documentation to Sphinx

2021-07-02 Thread Eli Zaretskii via Gcc-patches
> Cc: jos...@codesourcery.com, g...@gcc.gnu.org, gcc-patches@gcc.gnu.org
> From: Martin Liška 
> Date: Fri, 2 Jul 2021 11:30:02 +0200
> 
> > So the purpose of having the comma there is to avoid having a period
> > in the middle of a sentence, which is added by makeinfo (because the
> > Info readers need that).  Having a comma there may seem a bit
> > redundant, but having a period will definitely look like a typo, if
> > not confuse the heck out of the reader, especially if you want to use
> > these inline cross-references so massively.
> 
> Well, then it's a bug in Makeinfo.

No, it isn't a bug in makeinfo.  It's a (mis)feature of the Info
format: a cross-reference needs to have a punctuation character after
it, so that Info readers would know where's the end of the node/anchor
name to which the cross-reference points.  Info files are largely
plain-ASCII files, so the Info readers need help in this case, and
makeinfo produces what they need.

> >> What type of conversions and style are going to change with conversion to 
> >> Sphinx?
> > 
> > Anything that is different from the style conventions described in the
> > Texinfo manual.  We have many such conventions.
> 
> Which is supposed to be here:
> https://www.gnu.org/prep/standards/html_node/Documentation.html#Documentation
> right?
> 
> I've just the text. About the shortening of section names in a TOC. I 
> couldn't find
> it in the GNU Documentation manual.

No, there's also a lot of style guidelines in the Texinfo manual
itself.  Basically, the documentation of almost every Texinfo
directive includes some style guidelines, and there are also sections
which are pure guidelines, like the nodes "Conventions", "Node Names",
"Structuring Command Types", and some others.

> >> Again, please show up concrete examples. What you describe is very 
> >> theoretical.
> > 
> > We've already seen one: the style of writing inline cross-references
> > with the equivalent of @ref.  We also saw another: the way you
> > converted the menus.  It is quite clear to me that there will be
> > others.  So I'm not sure why you need more evidence that this could be
> > a real issue.
> 
> As explained, @ref are generated by Makeinfo in a strange way.
> About the menus, I was unable to find it..

See the node "Menu Parts" in the Texinfo manual.  If you look at other
GNU manuals, you will see that it is a de-facto standard to provide
most menu items with short descriptions.

> > But maybe all of this is intentional: maybe the GCC project
> > consciously and deliberately decided to move away of the GNU
> > documentation style and conventions, and replace them with whatever
> > the Sphinx and RST conventions are?  In that case, there's no reason
> > for me to even mention these aspects.
> 
> My intention is preserving status quo as much as possible.

Well, but you definitely deviated from the status quo, and it sounds
like you did that deliberately, without any discussion.

> On the other hand, Sphinx provides quite some nice features why I wanted to 
> use it.

Which features are those?


Re: [PATCH] Port GCC documentation to Sphinx

2021-07-01 Thread Eli Zaretskii via Gcc
> Cc: jos...@codesourcery.com, gcc@gcc.gnu.org, gcc-patc...@gcc.gnu.org
> From: Martin Liška 
> Date: Thu, 1 Jul 2021 18:04:24 +0200
> 
> > Emacs doesn't hide the period.  But there shouldn't be a period to
> > begin with, since it's the middle of a sentence.  The correct way of
> > writing this in Texinfo is to have some punctuation: a comma or a
> > semi-colon, after the closing brace, like this:
> > 
> >This is the warning level of @ref{e,,-Wshift-overflow3}, and …
> 
> I don't see why we should put a comma after an option reference.

You explained it yourself later on:

> It's all related to Texinfo. Sphinx generates e.g.
> Enabled by @ref{7,,-Wall} and something else.
> 
> as documented here:
> https://www.gnu.org/software/texinfo/manual/texinfo/html_node/_0040ref.html
> 
> Then it ends with the following info output:
> 
>   Enabled by *note -Wall: 7. and something else.
> 
> So the period is added by Texinfo. If I put comma after a reference, then
> the period is not added there.
  ^

So the purpose of having the comma there is to avoid having a period
in the middle of a sentence, which is added by makeinfo (because the
Info readers need that).  Having a comma there may seem a bit
redundant, but having a period will definitely look like a typo, if
not confuse the heck out of the reader, especially if you want to use
these inline cross-references so massively.

> > I don't think the GCC manuals should necessarily be bound by the
> > Sphinx standards.  Where those standards are sub-optimal, it is
> > perfectly okay for GCC (and other projects) to deviate.  GCC and other
> > GNU manuals used a certain style and convention for decades, so
> > there's more than enough experience and tradition to build on.
> 
> What type of conversions and style are going to change with conversion to 
> Sphinx?

Anything that is different from the style conventions described in the
Texinfo manual.  We have many such conventions.

> Do you see any of them worse than what we have now?

I didn't bother reading the Sphinx guidelines yet, and don't know when
(and if) I will have time for that.  I do think the comparison should
be part of the job of moving to Sphinx.

> > I will no longer pursue this point, but let me just say that I
> > consider it a mistake to throw away all the experience collected using
> > Texinfo just because Sphinx folks have other traditions and
> > conventions.  It might be throwing the baby with the bathwater.
> > 
> 
> Again, please show up concrete examples. What you describe is very 
> theoretical.

We've already seen one: the style of writing inline cross-references
with the equivalent of @ref.  We also saw another: the way you
converted the menus.  It is quite clear to me that there will be
others.  So I'm not sure why you need more evidence that this could be
a real issue.

But maybe all of this is intentional: maybe the GCC project
consciously and deliberately decided to move away of the GNU
documentation style and conventions, and replace them with whatever
the Sphinx and RST conventions are?  In that case, there's no reason
for me to even mention these aspects.


Re: [PATCH] Port GCC documentation to Sphinx

2021-07-01 Thread Eli Zaretskii via Gcc
> Cc: jos...@codesourcery.com, gcc@gcc.gnu.org, gcc-patc...@gcc.gnu.org
> From: Martin Liška 
> Date: Thu, 1 Jul 2021 16:14:30 +0200
> 
> >> If I understand the notes correct, the '.' should be also hidden by e.g. 
> >> Emacs.
> > 
> > No, it doesn't.  The actual text in the Info file is:
> > 
> > *note -std: f.‘=iso9899:1990’
> > 
> > and the period after " f" isn't hidden.  Where does that "f" come from
> > and what is its purpose here? can it be removed (together with the
> > period)?
> 
> It's name of the anchor used for the @ref. The names are automatically 
> generated
> by makeinfo. So there's an example:
> 
> This is the warning level of @ref{e,,-Wshift-overflow3} and …
> 
> becomes in info:
> This is the warning level of *note -Wshift-overflow3: e. and …
> 
> I can ask the question at Sphinx, the Emacs script should hide that.

Emacs doesn't hide the period.  But there shouldn't be a period to
begin with, since it's the middle of a sentence.  The correct way of
writing this in Texinfo is to have some punctuation: a comma or a
semi-colon, after the closing brace, like this:

  This is the warning level of @ref{e,,-Wshift-overflow3}, and …

Does Sphinx somehow generate the period if there's no comma, or does
it do it unconditionally, i.e. even if there is punctuation after
the closing brace?

> > This actually raises a more general issue with this Sphinx porting
> > initiative: what will be the canonical style guide for maintaining the
> > GCC manual in Sphinx, or more generally for writing GNU manuals in
> > Sphinx?  For Texinfo, we have the Texinfo manual, which both documents
> > the language and provides style guidelines for how to use Texinfo for
> > producing good manuals.  Contributors to GNU manuals are using those
> > guidelines for many years.  Is there, or will there be, an equivalent
> > style guide for Sphinx?  If not, how will the future contributors to
> > the GCC manuals know what are the writing and style conventions?
> 
> No, I'm not planning any extra style guide. We will use the standard Sphinx RST
> manual and one can find many tutorials about how to do it.

Are you sure everything there is good for our manuals?  Did you
compare the style conventions there with what we have in the Texinfo
manual?

Moreover, this means people who contribute to other manuals will now
have to learn two different styles, no?  And that's in addition to
learning one more language.

> > That is why I recommended to discuss this on the Texinfo list: that's
> > the place where such guidelines are discussed, and where we have
> > experts who understand the effects and consequences of using this or
> > that style.  The current style in GNU manuals is to have the menus as
> > we see them in the existing GCC manuals: with a short description.
> > Maybe there are good reasons to deviate from that style, but
> > shouldn't this be at least presented and discussed, before the
> > decision is made?  GCC developers are not the only ones who will be
> > reading the future GCC manuals.
> > 
> 
> That seems to me a subtle adjustment and it's standard way how people generate
> TOC in Sphinx. See e.g. the Linux kernel documentation:
> https://www.kernel.org/doc/html/latest/

I don't think the GCC manuals should necessarily be bound by the
Sphinx standards.  Where those standards are sub-optimal, it is
perfectly okay for GCC (and other projects) to deviate.  GCC and other
GNU manuals used a certain style and convention for decades, so
there's more than enough experience and tradition to build on.

I will no longer pursue this point, but let me just say that I
consider it a mistake to throw away all the experience collected using
Texinfo just because Sphinx folks have other traditions and
conventions.  It might be throwing the baby with the bathwater.

