Re: [BUG] Bash segfaults on an infinitely recursive function (resend)
On 09/24/2017 12:53 PM, Shlomi Fish wrote:
> I see. Well, the general wisdom is that a program should not ever
> segfault, but instead gracefully handle the error and exit.

This is possible by installing a SIGSEGV handler that gracefully exits the program when stack overflow is detected (although such a handler is EXTREMELY limited in what it can safely do); in fact, the GNU libsigsegv library helps with this task, and it is used by some other applications (such as GNU m4 and GNU awk) that can also be driven into infinite recursion by poor user input. However, Chet is not obligated to use it (even though the idea has been mentioned on the list before).

> Perhaps implement a maximal recursion depth like zsh does.

Bash does, in the form of FUNCNEST, but you have to opt into it; otherwise it would be an arbitrary limit, and arbitrary limits go against the GNU coding standards.

By the way, it is in general IMPOSSIBLE to write bash so that it can handle ALL possible bad user scripts and still remain responsive to further input. Note that in my description of handling SIGSEGV above, I mention that it is only safe to gracefully turn what would otherwise be the default core dump into a useful error message - but bash STILL has to exit at that point, because you cannot guarantee what other resources (including malloc locks) might still be on the stack, where a longjmp back out to the main parsing loop may cause future deadlock if you do anything unsafe. If you think you can make bash gracefully handle ALL possible bad inputs WITHOUT exiting or going into an infinite loop itself, then you are claiming that you have solved the Halting Problem, which any good computer scientist already knows has been proven to be undecidable.

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization: qemu.org | libvirt.org
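To illustrate the FUNCNEST opt-in mentioned above, here is a minimal sketch; the function name `recurse` and the limit of 64 are arbitrary choices for the demo:

```shell
# Without FUNCNEST, this unbounded recursion would eventually blow the
# stack; with FUNCNEST set, bash aborts the call chain with an error
# message ("maximum function nesting level exceeded") instead of
# dying from SIGSEGV.
bash -c '
  FUNCNEST=64            # opt-in recursion limit (bash 4.2 and later)
  recurse() { recurse; }
  recurse
'
echo "bash exited with status $?"   # a normal error exit, not a SIGSEGV death
```

Note that the limit is per-shell and unset by default, which is exactly the "opt in, no arbitrary limit" policy described above.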
Re: Wrong AC_TRY_COMPILE idiom
On 9/25/17 1:14 PM, Christian Weisgerber wrote:
> I'm forwarding this bug report by Robert Nagy,
> which also concerns bash 4.4:
>
>> Unbreak autoconf checks with clang by not using nested functions
>> in the checks.

Thanks for the report. This has clearly not been too serious a problem, since the code in question has been in place since October, 2002. I'll fix it for the next bash release.

Chet
--
``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates

Chet Ramey, UTech, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
Wrong AC_TRY_COMPILE idiom
I'm forwarding this bug report by Robert Nagy, which also concerns bash 4.4:

> Unbreak autoconf checks with clang by not using nested functions in the
> checks.
>
> Someone clearly did not read the autoconf documentation, because using
> the following macros with a function definition inside the body ends up
> declaring a function inside a function:
>
>   AC_TRY_COMPILE([], [ int main() { return 0; } ], ...)
>   AC_LANG_PROGRAM([[]], [[int main (void) { return 0; }]])
>   AC_TRY_LINK([], [int main (void) { return 0; }], ...)
>
> Result:
>
>   int main ()
>   {
>     int main (void) { return 0; }
>     ;
>     return 0;
>   }
>
> Nested functions are a gcc extension which is not supported by clang:
>
>   test.c:4:17: error: function definition is not allowed here
>   int main (void) { return 0; }
>                   ^
>   1 error generated.
>
> This causes tests to fail in the configure scripts, resulting in missing
> compile- and link-time flags in the builds. This resulted in weird
> behaviour in several pieces of software, like gnome hanging completely
> because gtk+3 was not built properly.
>
> This change introduces the following fixes:
>
> - remove the int main() declaration from AC_TRY_COMPILE, AC_LANG_PROGRAM,
>   and AC_TRY_LINK bodies, as the macros supply a declaration already, and
>   people misused them
> - change to use AC_LANG_SOURCE when needed, in case a complete source
>   block is specified

Here's the trivial patch for bash 4.4:

--- configure.ac.orig	Wed Sep  7 22:56:28 2016
+++ configure.ac	Mon Sep 25 19:03:03 2017
@@ -808,7 +808,7 @@
 AC_CACHE_VAL(bash_cv_strtold_broken,
 [AC_TRY_COMPILE(
 [#include <stdlib.h>],
-[int main() { long double r; char *foo, bar; r = strtold(foo, &bar);}],
+[long double r; char *foo, bar; r = strtold(foo, &bar);],
 bash_cv_strtold_broken=no, bash_cv_strtold_broken=yes,
 [AC_MSG_WARN(cannot check for broken strtold if cross-compiling, defaulting to no)])
 ]

--
Christian "naddy" Weisgerber	na...@mips.inka.de
Re: [BUG] Bash segfaults on an infinitely recursive function (resend)
On Sun, Sep 24, 2017 at 08:53:46PM +0300, Shlomi Fish wrote:
> I see. Well, the general wisdom is that a program should not ever
> segfault, but instead gracefully handle the error and exit.

This only applies to applications, not to tools that let YOU write applications. I can write a trivial C program that gcc will compile into a program that segfaults. That doesn't mean gcc has a bug. It means my C program has a bug. Likewise, if you write a shell script that causes a shell to recurse infinitely and exceed its available stack space, the bug is in your script, not in the shell that faithfully tried to run it.

(See also Chet's two replies pointing to FUNCNEST.)