Re: [PATCH] printf: add %#s alias to %b
On 07/09/2023 19:46, Eric Blake wrote: > On Thu, Sep 07, 2023 at 11:53:54PM +0700, Robert Elz wrote: >> And for those who have been following this issue, the new text for the forthcoming POSIX version has removed any mention of obsoleting %b from printf(1) - instead it will simply note that there will be a difference between printf(1) and printf(3) once the latter gets its version of %b specified (in C23, and in POSIX, in the next major version that follows the coming one, almost certainly) - and to encourage implementors to consider possible solutions. I've considered, and I don't see a problem needing solving, so I'm intending to do precisely nothing, unless someone actually finds a need for binary output from printf(1), which seems unlikely to ever happen to me (I mean a real need, not just to be the same as printf(3) "just because"). So, we can all go back to sleep now - and Chet, I'd undo %#s before it appears in a release; there's no need, and having it might eventually just cause more backward compat issues. > Indeed, at this point, even though I proposed a patch for %#s in coreutils, I'm inclined to NOT apply it there. Agreed. > The ksh extension of %..2d to output in binary does sound worth replicating; I wonder if glibc would consider putting that in their printf(3); and I could see adding it to Coreutils (whether or not bash adds it - because ksh already has it). This does seem useful as there is prior art, and it's quite general. FWIW there is related functionality in coreutils currently through `basenc --base2msbf` cheers, Pádraig
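For anyone wanting binary output from the shell today, the ksh93 `%..2d` behavior mentioned above can be approximated in plain bash arithmetic; a minimal sketch (the `to_binary` helper name is hypothetical, not an existing builtin):

```shell
# Print a non-negative integer in base 2, roughly what ksh93's
# printf '%..2d' produces (to_binary is an illustrative helper).
to_binary() {
  local n=$1 out=
  (( n == 0 )) && out=0
  while (( n > 0 )); do
    out=$(( n & 1 ))$out   # peel off the low bit
    (( n >>= 1 ))
  done
  printf '%s\n' "$out"
}
to_binary 10   # prints 1010
```

For larger data, `basenc --base2msbf` from coreutils covers the same ground on whole byte streams.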
support for resetting signals to the _system_ default
Currently `trap` has no way to set a signal to the system default. Rather, `trap - PIPE` for example sets the signal handling to the value it had on entry to the shell. It's common enough for systems to disable SIGPIPE, so it would be useful if a shell wrapper could re-enable standard SIGPIPE handling. I'm not sure what syntax to use for this, maybe: `trap + PIPE` Note we've recently added the --default-signal=PIPE option to env(1) to support this functionality, though it would be useful and symmetrical to have this supported by the shell also. thanks, Pádraig.
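To make the gap concrete: a SIGPIPE disposition ignored by a parent survives exec, and `trap - PIPE` in the child can only restore that at-entry state. A sketch (the `env` workaround needs a coreutils recent enough to have --default-signal; `your_command` is illustrative):

```shell
# SIGPIPE ignored by a parent survives exec; the inner bash reports it
# as ignored-at-entry, and "trap - PIPE" there could only restore this
# ignored state, not the system default.
bash -c "trap '' PIPE; bash -c 'trap -p PIPE'"
# With a recent coreutils, env can reset the disposition before exec:
#   trap '' PIPE
#   env --default-signal=PIPE your_command
```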
Re: which paradigms does bash support
On 14/03/18 00:22, Pierre Gaston wrote: > On Mon, Jan 26, 2015 at 6:05 PM, Pádraig Brady wrote: > >> On 26/01/15 13:43, Greg Wooledge wrote: >>> On Sun, Jan 25, 2015 at 08:11:41PM -0800, garegi...@gmail.com wrote: >>>> As a programming language which paradigms does bash support. >>>> Declarative, procedural, imperative? >>> >>> This belongs on help-b...@gnu.org so I'm Cc'ing that address. >>> >>> Shell scripts are procedural. >> >> It should be noted that shell programming is closely related to functional >> programming. >> I.E. functional programming maintains no external state and provides >> data flow synchronisation in the language. This maps closely to the >> UNIX filter idea; data flows in and out, with no side effects to the >> system. >> >> By trying to use filters and pipes instead of procedural shell statements, >> you get the advantage of using compiled code, and implicit multicore >> support etc. >> >> cheers, >> Pádraig. >> > > Though I understand what you say and maybe you can see pipes as something > functional(ish), > I believe this is a misleading statement as imo shell scripting is not even > close to being functional in any kind of way. Well my point was, filters and pipes are similar to functional programming. IMHO shell scripts are generally better when using these concepts when possible, rather than using procedural shell statements. See also http://okmij.org/ftp/Computation/monadic-shell.html cheers, Pádraig
Re: Worth mentioning in documentation
On 10/08/15 05:55, Eric Blake wrote: > On 08/10/2015 02:18 AM, Juanma wrote: > >> Here is another point I find confusing: I thought a "shell builtin" didn't >> have a separate binary executable file, like 'cd' (which cd => fail), > > Actually, POSIX requires that there be a separate 'cd' binary, although > it does not have to behave the same as the shell builtin. (About all an > exec'able cd can do is tell you by exit status whether the builtin cd > would succeed or fail; or be used for its CDPATH side-effect of printing > a directory name). > > GNU/Linux systems tend to ignore the POSIX requirement of exec'able > counterparts, although here is how Solaris effectively does it: > > $ cat /bin/cd > #!/bin/sh > exec $(basename $0) "$@" > $ > > and hard-linking that 2-liner to all of the shell builtins where POSIX > requires to have a non-builtin counterpart. > > See also http://austingroupbugs.net/view.php?id=705 > > It is only the special builtins (such as 'exit') where POSIX does not > require an exec'able counterpart. For the record I see this on Fedora 25 $ rpm -q bash bash-4.3.43-4.fc25.x86_64 $ rpm -ql bash | grep /bin/ | grep -v bash /usr/bin/alias /usr/bin/bg /usr/bin/cd /usr/bin/command /usr/bin/fc /usr/bin/fg /usr/bin/getopts /usr/bin/hash /usr/bin/jobs /usr/bin/read /usr/bin/sh /usr/bin/type /usr/bin/ulimit /usr/bin/umask /usr/bin/unalias /usr/bin/wait $ cat /usr/bin/cd #!/bin/sh builtin cd "$@" cheers, Pádraig
Re: pipefail with SIGPIPE/EPIPE
On 24/03/17 04:57, Greg Wooledge wrote: > On Thu, Mar 23, 2017 at 10:14:01PM -0700, Pádraig Brady wrote: >> OK let's not derail this into a discussion specific to errexit. >> Can we please improve things? >> You say to not use errexit, and instead use `|| exit 1` where appropriate. >> In that case can we fix this case? >> >> set -o pipefail >> yes | head -n1 || exit 1 >> echo this is skipped > > What do you think is broken, here? > > imadev:~$ yes | head -n1 > y > imadev:~$ declare -p PIPESTATUS > declare -a PIPESTATUS=([0]="141" [1]="0") > > I don't see any problem in bash's behavior. It's exiting because you > asked it to exit if the pipe failed, and the pipe failed. The pipe > failed because yes(1) returned a nonzero exit code, and you turned on > the pipefail option. > > What exactly are you asking to change? I would like bash to treat SIGPIPE specially in this case, as it's not an error, rather the standard "back pressure" mechanism used in pipes. The fact that SIGPIPE kills by default is only a shortcut mechanism allowing processes to exit automatically without changing their code. The shell should not treat this as a real error though. I.E. so I can write code like: set -o pipefail vv=$(yes | head) || exit 1 echo finished which will exit if yes(1) or head(1) segfaults or otherwise really fails, but will proceed normally when the only nonzero status is due to SIGPIPE. thanks, Pádraig
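Until the shell treats SIGPIPE specially, the intent can be approximated by mapping exit status 141 (128 + SIGPIPE) back to success; a sketch:

```shell
set -o pipefail
yes | head -n1 > /dev/null
status=$?
# 141 = 128 + SIGPIPE(13): death by SIGPIPE here is back pressure,
# not a real failure, so map it back to success
[ "$status" -eq 141 ] && status=0
echo "status=$status"
```

Real failures down the pipe (segfaults, nonzero exits) keep their original status and are still caught.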
Re: pipefail with SIGPIPE/EPIPE
On 23/03/17 09:34, Greg Wooledge wrote: > On Thu, Mar 23, 2017 at 08:50:45AM -0700, Pádraig Brady wrote: >> I was bitten by this again when combined with set -e. >> I.E. this script doesn't finish: >> >> #!/bin/bash >> set -o errexit >> set -o pipefail >> yes | head -n1 >> echo finished >> >> That makes the errexit and pipefail options decidedly less useful. > > No. It makes errexit less useful. Errexit (a.k.a. set -e) is horrible, > and you should not be using it in any new shell scripts you write. > It exists solely for support of legacy scripts. OK let's not derail this into a discussion specific to errexit. Can we please improve things? You say to not use errexit, and instead use `|| exit 1` where appropriate. In that case can we fix this case? set -o pipefail yes | head -n1 || exit 1 echo this is skipped cheers, Pádraig
Re: pipefail with SIGPIPE/EPIPE
On 15/02/15 14:14, Pádraig Brady wrote: > On 15/02/15 21:59, Daniel Colascione wrote: >> On 02/15/2015 01:48 PM, Chet Ramey wrote: >>> On 2/13/15 12:19 PM, Pádraig Brady wrote: >>>> I was expecting bash to handle SIGPIPE specially here, >>>> as in this context it's informational rather than an indication of error. >>> >>> I don't agree. It's a fatal signal whose default disposition is to >>> terminate a process, which is exactly what happens in your example. >> >> The purpose of pipefail is to make the shell indicate when something has >> gone wrong anywhere in a pipeline. For most programs, SIGPIPE does not >> indicate that something went wrong. Instead, SIGPIPE is expected >> behavior. When pipefail spuriously reports expected behavior as an >> error, Bash comes less useful. > > Exactly. SIGPIPE is special. It indicates the pipe is closed. > That may be due to something having gone wrong down the pipe, > but if that's the case the status code will be that of the > failing process down the pipe. > If it's only SIGPIPE that's significant to the status, > then we know it's only informational, in which case the status > should be 0 to indicate things have gone as expected. > > There are many cases of the pipe being legitimately closed early. > ... | head > ... | grep -m1 ... > etc. > >>> You might consider trapping or ignoring SIGPIPE in situations where it >>> might be an issue. >> >> If I were emperor of the world, I would make SIGPIPE's SIG_DFL action >> terminate the process with exit status 0. But POSIX says we can't do >> that. Even locally, I make my system do that without kernel surgery. >> It's also not reasonable to modify every program that might be part of a >> pipeline so that it exits successfully on EPIPE. >> >> Making Bash treat SIGPIPE death as success is the next best option. > > Only SIG_IGN isn't reset on exec, in which case each process > would be getting EPIPE on write(), which most don't (and don't need to) > handle explicitly. 
> bash handling the SIGPIPE specially seems like the best option to me too. I was bitten by this again when combined with set -e. I.E. this script doesn't finish: #!/bin/bash set -o errexit set -o pipefail yes | head -n1 echo finished That makes the errexit and pipefail options decidedly less useful. Looking at this the other way: does the current behavior help? Would any existing code be depending on the current behavior of treating SIGPIPE as an error, when it's really only a shortcut informational mechanism? thanks, Pádraig
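As a stop-gap under errexit, the pipeline can be guarded with `||` (which exempts it from errexit), and death-by-SIGPIPE accepted explicitly; a sketch:

```shell
#!/bin/bash
set -o errexit -o pipefail
# The || guard keeps errexit from firing on the pipeline itself; the
# test then accepts death-by-SIGPIPE (128+13=141) while still letting
# any other nonzero status abort the script.
yes | head -n1 > /dev/null || [ $? -eq 141 ]
echo finished
```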
Re: ordering of printed lines changes when redirecting
On 18/07/16 14:59, Greg Wooledge wrote: > On Mon, Jul 18, 2016 at 10:22:46AM +0200, walter harms wrote: >> ( ./a.out 2>&1 ) >> hallo 5 >> hallo 6 >> hallo 7 >> hallo 8 >> hallo 9 > > (snip) > >> ./a.out >xx 2>&1 >> cat xx >> hallo 6 >> hallo 8 >> hallo 5 >> hallo 7 >> hallo 9 > > Looks like an artifact of stdio (libc) buffering. When stdout and > stderr are going to a terminal (first example), you get line buffering > (flushed after each newline), and thus the order you expect. When stdout > and stderr are going to a file, you get much larger buffers. Looks like > the flush happens implicitly when the program exits. In your second > example here, you happen to get all of the stderr lines first, followed > by all of the stdout lines. > > See if the behavior changes when you add these lines to the start of > the C program: > > setvbuf(stdout, NULL, _IOLBF, 0); > setvbuf(stderr, NULL, _IOLBF, 0); One might be able to control this from outside the program as well, like: stdbuf -oL your_prog
Re: shell-expand-line drops quotation marks
On 04/11/15 13:47, Chet Ramey wrote: > On 11/3/15 7:44 PM, Keith Thompson wrote: >> The shell-expand-line command (bound to Escape-Ctrl-E) incorrectly removes >> quotation marks from >> the command line, often resulting in a command that differs from what the >> user intended to type. > > This is the documented behavior. shell-expand-line performs all of the > shell word expansions, including quote removal. How useful is that though when the expansion gives a different meaning? >> I often type Escape-Ctrl-E to expand a history substitution in place >> before typing Enter, but it has the side effect of stripping quotes from >> what I've already typed. > > If you want to perform history expansion, try M-^ (history-expand-line). Yes this is useful. I've set it up to happen automatically with this in my .inputrc $if Bash # do history expansion when space entered Space: magic-space $endif cheers, Pádraig.
Re: SIGINT handling
On 24/09/15 07:20, Stephane Chazelas wrote: > 2015-09-24 07:01:23 +0100, Stephane Chazelas: >> 2015-09-23 21:27:00 -0400, Chet Ramey: >>> On 9/19/15 5:31 PM, Stephane Chazelas wrote: >>> In case it was caused by some Debian patch, I recompiled the code of 4.3.42 from gnu.org and the one from the devel branch on the git repository (commit bash-20150911 snapshot) and still: $ ./bash -c 'sh -c "trap exit INT; sleep 10; :"; echo hi' ^Chi $ ./bash -c 'sh -c "trap exit INT; sleep 10; :"; echo hi' ^Chi $ ./bash -c 'sh -c "trap exit INT; sleep 10; :"; echo hi' ^C $ ./bash -c 'sh -c "trap exit INT; sleep 10; :"; echo hi' ^Chi Sometimes (and the frequency of occurrences is erratic, generally roughly 80% of "hi"s but at times, I don't see a "hi" in a while), the "hi" doesn't show up. Note that I press ^C well after sleep has started. >>> >>> It would be nice to see a system call trace for this so we can check >>> what's going on with the timing. >> >> I don't have them logged but I did several tests in gdb >> with "handle SIGINT nostop pass" and as I said before, >> Upon the test that sets child_caught_sigint, waitpid() has not >> returned with EINTR and wait_sigint_received has been set. >> >> If I break on the SIGINT handler, I see the call trace at the >> return of the "syscall". >> >> I can try and get you a call trace later today. > [...] > > (gdb) handle SIGINT nostop pass > SIGINT is used by the debugger. > Are you sure you want to change it? (y or n) y > Signal Stop Print Pass to program Description > SIGINT No Yes Yes Interrupt > (gdb) break wait_sigint_handler > Breakpoint 1 at 0x443a70: file jobs.c, line 2241. > (gdb) run > Starting program: bash-4.3/bash -c ./a\;\ echo\ x > ^C > Program received signal SIGINT, Interrupt.
> > Breakpoint 1, wait_sigint_handler (sig=2) at jobs.c:2241 > 2241{ > (gdb) bt > #0 wait_sigint_handler (sig=2) at jobs.c:2241 > #1 > #2 0x776bc31c in __libc_waitpid (pid=pid@entry=-1, > stat_loc=stat_loc@entry=0x7fffdbc8, options=options@entry=0) at > ../sysdeps/unix/sysv/linux/waitpid.c:31 > #3 0x00445f3d in waitchld (block=block@entry=1, wpid=5337) at > jobs.c:3224 > #4 0x0044733b in wait_for (pid=5337) at jobs.c:2485 > #5 0x00437992 in execute_command_internal > (command=command@entry=0x70bb88, asynchronous=asynchronous@entry=0, > pipe_in=pipe_in@entry=-1, pipe_out=pipe_out@entry=-1, > fds_to_close=fds_to_close@entry=0x70bde8) at execute_cmd.c:829 > #6 0x00437b0e in execute_command (command=0x70bb88) at > execute_cmd.c:390 > #7 0x00435f23 in execute_connection (fds_to_close=0x70bdc8, > pipe_out=-1, pipe_in=-1, asynchronous=0, command=0x70bd88) at > execute_cmd.c:2494 > #8 execute_command_internal (command=0x70bd88, > asynchronous=asynchronous@entry=0, pipe_in=pipe_in@entry=-1, > pipe_out=pipe_out@entry=-1, fds_to_close=fds_to_close@entry=0x70bdc8) > at execute_cmd.c:945 > #9 0x0047955b in parse_and_execute (string=, > from_file=from_file@entry=0x4b5f96 "-c", flags=flags@entry=4) at > evalstring.c:387 > #10 0x004205d7 in run_one_command (command=) at > shell.c:1348 > #11 0x0041f524 in main (argc=3, argv=0x7fffe258, > env=0x7fffe278) at shell.c:695 > (gdb) frame 2 > #2 0x776bc31c in __libc_waitpid (pid=pid@entry=-1, > stat_loc=stat_loc@entry=0x7fffdbc8, options=options@entry=0) at > ../sysdeps/unix/sysv/linux/waitpid.c:31 > 31 ../sysdeps/unix/sysv/linux/waitpid.c: No such file or directory. 
> (gdb) disassemble > Dump of assembler code for function __libc_waitpid: >0x776bc300 <+0>: mov0x2f14cd(%rip),%r9d# > 0x779ad7d4 <__libc_multiple_threads> >0x776bc307 <+7>: test %r9d,%r9d >0x776bc30a <+10>:jne0x776bc336 <__libc_waitpid+54> >0x776bc30c <+12>:xor%r10d,%r10d >0x776bc30f <+15>:movslq %edx,%rdx >0x776bc312 <+18>:movslq %edi,%rdi >0x776bc315 <+21>:mov$0x3d,%eax >0x776bc31a <+26>:syscall > => 0x776bc31c <+28>:cmp$0xf000,%rax >0x776bc322 <+34>:ja 0x776bc325 <__libc_waitpid+37> >0x776bc324 <+36>:retq >0x776bc325 <+37>:mov0x2ebb3c(%rip),%rdx# > 0x779a7e68 >0x776bc32c <+44>:neg%eax >0x776bc32e <+46>:mov%eax,%fs:(%rdx) >0x776bc331 <+49>:or $0x,%rax > (gdb) fin > Run till exit from #2 0x776bc31c in __libc_waitpid > (pid=pid@entry=-1, stat_loc=stat_loc@entry=0x7fffdbc8, > options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:31 > 0x00445f3d in waitchld (block=block@entry=1, wpid=5481) at jobs.c:3224 > 3224 pid = WAITPID (-1, &status, waitpid_flags); > V
Re: 4-byte script triggers null ptr deref and segfault
On 17/09/15 18:20, Greg Wooledge wrote: > On Thu, Sep 17, 2015 at 11:50:44AM -0500, Brian Carpenter wrote: >> While fuzzing GNU bash version 4.3.42(1)-release >> (x86_64-unknown-linux-gnu) with AFL(http://lcamtuf.coredump.cx/afl), I >> stumbled upon a 4-byte 'script' that triggers a null ptr deref and causes a >> segfault. >> >> https://savannah.gnu.org/support/index.php?108885 > > Well, that's an annoying web-to-mail interface. It didn't include the > full bug report? > > The web page says the hexdump of the attached script is 3b21 2620 > which I would normally interpret as `;!& '. > > But the attached script itself is actually `!; &'. Apparently the > hex dump tool in question is doing some sort of 16-bit grouping with > little endian byte swapping. > > After getting the correct content into the script, I can reproduce > this on HP-UX in 4.3.39: > > imadev:~$ printf '!; &' > x > imadev:~$ bash x > Segmentation fault (core dumped) FWIW _not_ reproduced with bash-4.3.39-1.fc22.x86_64
Re: substitution "read all from fd" silently fails: $(<
On 01/07/15 22:48, Stephane Chazelas wrote: > 2015-07-01 22:19:10 +0300, Ilya Basin: >> Hi list. >> >> Want to read whole stdin into variable. >> Don't want to spawn new processes (cat). > [...] > > Note that $(<file) still forks a process in bash, but instead of executing /bin/cat in that process, it does the reading (from the file) and writing (to the pipe) by itself (and the parent reads from the other end of the pipe to make up the substitution). > > ksh (ksh93 and mksh) and zsh do not spawn a process in the $(<file) case.
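The `$(<file)` form itself is simple to demonstrate; a minimal sketch:

```shell
# $(<file) substitutes the file's contents; bash still forks for it,
# but the child reads the file itself rather than exec'ing cat
tmp=$(mktemp)
printf 'hello\n' > "$tmp"
v=$(< "$tmp")
echo "$v"        # prints hello
rm -f "$tmp"
```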
Re: Malicious translation file can cause buffer overflow
On 30/04/15 23:08, Trammell Hudson wrote: > Configuration Information [Automatically generated, do not change]: > Machine: x86_64 > OS: linux-gnu > Compiler: gcc > Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' > -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-unknown-linux-gnu' > -DCONF_VENDOR='unknown' -DLOCALEDIR='/tmp/local/share/locale' > -DPACKAGE='bash' -DSHELL -DHAVE_CONFIG_H -I. -I.. -I../include -I../lib > -g -O2 > uname output: Linux hsthudson.aoa.twosigma.com 3.4.86-ts2 #3 SMP Wed Apr 9 > 03:28:16 GMT 2014 x86_64 GNU/Linux > Machine Type: x86_64-unknown-linux-gnu > > Bash Version: 4.3 > Patch Level: 30 > Release Status: release > > > Description: > The gettext translated messages for "Done", "Done(%d)" and "Exit %d" > in jobs.c are copied to a static allocated buffer. A user could set the > LANGUAGE variable to point to a malicious translation file that has > translations that are longer than 64-bytes for these strings to create > a buffer overflow. > > Since LANGUAGE is passed unchanged by sudo this might be usable for > privilege escalation. > > > Repeat-By: > Create a .po file with a bogus translation: > > #: jobs.c:1464 jobs.c:1489 > msgid "Done" > msgstr "Klaar > 123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890" > > And start an interactive shell that puts a command into the background: > > LANGUAGE="nl.utf8" PS1='$ ' ./bash --noprofile -norc > $ sleep 1 & > [1] 14464 > $ sleep 2 > [1]+ Klaar > 123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 > sleep > 1 How does one override the system translation? I thought gettext only looks in the dir passed to bindtextdomain() ?
Re: pipefail with SIGPIPE/EPIPE
On 15/02/15 21:59, Daniel Colascione wrote: > On 02/15/2015 01:48 PM, Chet Ramey wrote: >> On 2/13/15 12:19 PM, Pádraig Brady wrote: >>> I was expecting bash to handle SIGPIPE specially here, >>> as in this context it's informational rather than an indication of error. >> >> I don't agree. It's a fatal signal whose default disposition is to >> terminate a process, which is exactly what happens in your example. > > The purpose of pipefail is to make the shell indicate when something has > gone wrong anywhere in a pipeline. For most programs, SIGPIPE does not > indicate that something went wrong. Instead, SIGPIPE is expected > behavior. When pipefail spuriously reports expected behavior as an > error, Bash becomes less useful. Exactly. SIGPIPE is special. It indicates the pipe is closed. That may be due to something having gone wrong down the pipe, but if that's the case the status code will be that of the failing process down the pipe. If it's only SIGPIPE that's significant to the status, then we know it's only informational, in which case the status should be 0 to indicate things have gone as expected. There are many cases of the pipe being legitimately closed early. ... | head ... | grep -m1 ... etc. >> You might consider trapping or ignoring SIGPIPE in situations where it >> might be an issue. > > If I were emperor of the world, I would make SIGPIPE's SIG_DFL action > terminate the process with exit status 0. But POSIX says we can't do > that. Even locally, I make my system do that without kernel surgery. > It's also not reasonable to modify every program that might be part of a > pipeline so that it exits successfully on EPIPE. > > Making Bash treat SIGPIPE death as success is the next best option. Only SIG_IGN isn't reset on exec, in which case each process would be getting EPIPE on write(), which most don't (and don't need to) handle explicitly. bash handling the SIGPIPE specially seems like the best option to me too. thanks, Pádraig.
pipefail with SIGPIPE/EPIPE
I was expecting bash to handle SIGPIPE specially here, as in this context it's informational rather than an indication of error. I.E. if a command to the right actually does fail the status is set to that fail and the resulting SIGPIPEs to the left are inconsequential to the status. If no command fails, then the SIGPIPEs are informational, and it seems they should also be inconsequential to the status. $ set -o pipefail $ yes | head -n1 || echo error y error cheers, Pádraig.
process substitution stdout connected to pipeline
zsh behaves as I expected: % : | tee >(md5sum) | sha1sum da39a3ee5e6b4b0d3255bfef95601890afd80709 - d41d8cd98f00b204e9800998ecf8427e - bash though seems to connect the stdout of the process substitution to the pipeline, which seems like a bug: $ : | tee >(md5sum) | sha1sum 253a7a49edee354f35b2e416554127cf29c85724 - cheers, Pádraig
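Whichever way the shell wires the substitution's stdout, redirecting it explicitly inside the `>()` keeps it out of the downstream pipeline; a sketch:

```shell
# Send md5sum's output to stderr (or a file) so sha1sum only ever
# hashes tee's own stdout, regardless of how the shell wires >(...):
: | tee >(md5sum >&2) | sha1sum
# stdout is the sha1 of empty input:
# da39a3ee5e6b4b0d3255bfef95601890afd80709  -
```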
Re: Command line prefixed with space is not saved
On 10/02/15 09:00, Vladimir Kanazir wrote: > Configuration Information [Automatically generated, do not change]: > Machine: x86_64 > OS: linux-gnu > Compiler: gcc > Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' > -DCONF_OSTYPE='l$ > uname output: Linux canny 3.13.0-34-generic #60-Ubuntu SMP Wed Aug 13 > 15:45:27 $ > Machine Type: x86_64-pc-linux-gnu > > Bash Version: 4.3 > Patch Level: 11 > Release Status: release > > Description: > When you type space before the command, the command is not > saved in the history. > You can't see it when arrow up / CTRL+P is pressed. It is > like the command is never executed. > > Repeat-By: > Type space, then the command. Press enter. Then press arrow up > / CTRL+P. You will get the command before that one. That's a feature.
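Specifically, the behavior is governed by the HISTCONTROL variable; a typical configuration (many distributions ship something like this in the default ~/.bashrc):

```shell
# "ignorespace" skips space-prefixed commands from the history;
# "ignoredups" skips consecutive duplicates; "ignoreboth" is both.
HISTCONTROL=ignoreboth
```

Unsetting HISTCONTROL (or removing ignorespace from it) makes space-prefixed commands be recorded again.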
Re: messy bash.git history
On 06/02/15 21:13, Eric Blake wrote: > Chet, > > I've noticed that your 'devel' branch in bash.git is rather messy; > basically lots of commits that snapshot the state of a directory, then a > followup commit that removes leftovers and stray files. If you were to > set your .gitignore file (to make your exclusion list public) or your > .git/info/exclude file (to keep your exclusion list local to your > repository) to the file names that you normally clean up (such as *~, > *.old, *.save, *.orig; one per line), then your imports wouldn't create > those files in the first place, you wouldn't have to do cleanup commits, > and it would be easier to follow the history to see what really changed > without being inundated by all the noise on the side file > creation-deletion loops. > That would help, but it would be better to have a standard git repo with a commit per change. I understand that existing history can't be converted to a git repo, but that shouldn't preclude pushing standard commits to that git repo from now on. It would really help in verification of bugs/fixes, and would also ease contributions to bash using standard tooling. thanks, Pádraig.
Re: which paradigms does bash support
On 26/01/15 13:43, Greg Wooledge wrote: > On Sun, Jan 25, 2015 at 08:11:41PM -0800, garegi...@gmail.com wrote: >> As a programming language which paradigms does bash support. Declarative, >> procedural, imperative? > > This belongs on help-b...@gnu.org so I'm Cc'ing that address. > > Shell scripts are procedural. It should be noted that shell programming is closely related to functional programming. I.E. functional programming maintains no external state and provides data flow synchronisation in the language. This maps closely to the UNIX filter idea; data flows in and out, with no side affects to the system. By trying to use filters and pipes instead of procedural shell statements, you get the advantage of using compiled code, and implicit multicore support etc. cheers, Pádraig.
Re: json + bash
On 28/11/14 13:02, Greg Wooledge wrote: > On Fri, Nov 28, 2014 at 10:25:52AM +0600, Sergei Tokaev wrote: >> Hi out there. Will it be good to add json support in bash internally. As >> in variable declarations: declare "-j" "{"variable":"value"}" > > This doesn't seem like an appropriate addition to bash, in my opinion. > Where do you draw the line? Should bash also parse XML? Windows .INI > files? Comma-separated-value text files with quotes around elements > that contain commas? Apache-style log files with quotes around fields > that contain spaces? > > In my opinion, all of those things are inappropriate to add to bash. Agreed. For example to represent INI values in the shell one can: $ eval $(crudini --get example.ini section) That also does validation, like: $ crudini --get git/crudini/example.ini non-sh-compat space name útf8name 1num ls;name $ crudini --get --format=sh git/crudini/example.ini non-sh-compat 1num Invalid sh identifier: 1num As for json, I see that the jq utility has the '@sh' formatter, which I've not looked into but probably has a similar function. thanks, Pádraig.
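For shell-safe output without external tools, bash's own `printf %q` provides the kind of quoting that crudini's --format=sh and jq's @sh aim for; a minimal sketch:

```shell
# printf %q quotes a value so it can be reused safely as shell input;
# metacharacters in the value stay inert through eval
v='v 1; echo injected'
printf -v assignment 'value=%q' "$v"
eval "$assignment"
[ "$value" = "$v" ] && echo safe   # prints safe
```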
Re: [PATCH] bracketed paste support
On 10/30/2014 07:46 PM, Bob Proulx wrote: > Chet Ramey wrote: >> * paste into the text editor invoked by the `edit-and-execute-command' key >> binding (C-xC-e in emacs mode), edit the command if desired, and have the >> shell automatically execute the contents of the editing buffer when the >> editor exits > > Cool! This was an idea that I hadn't thought about before. (I often > would paste into my editor and clean it up. But doing it as part of > the edit-and-execute-command is very nice!) > > Thanks for this hint! Be careful with this though as the editor may be more exploitable. I see vim for example interprets the escape character even in paste mode. It probably shouldn't do that in paste mode, but you can see what happens if you paste from the following innocuous-looking HTML page: printf 'echo \033:q!ls' > t.html This just runs the "q!" (quit) command, but it could be anything of course. Pádraig.
Re: [PATCH] bracketed paste support
On 10/29/2014 09:42 PM, Daniel Colascione wrote: > On 10/29/2014 09:35 PM, Pádraig Brady wrote: >> On 10/27/2014 10:35 PM, Daniel Colascione wrote: >> >>> +@item enable-bracketed-paste >>> +@vindex enable-bracketed-paste >>> +If set to @samp{on} and the terminal supports bracketed paste mode, >>> +configure it to insert each paste into the editing buffer as a string >>> +instead of treating the characters pasted as normal input, preventing >>> +inadvertent execution of pasted commands. The default is @samp{on}. >> >> I see this defaults on. >> Does this mean one can't paste command sequences to readline now? > > Well, I don't know whether Chet left the feature enabled by default. I hope > he did, though, since preventing execution of pasted commands is one of the > feature's key benefits. In bash, you should be able to execute a pasted > command sequence by typing RET after the paste, but a paste by itself should > never begin execution. > > For better or worse, people copy and paste commands from websites all the > time. Even if a bit of shell looks innocuous, a malicious bit of JavaScript > could change the pasted text at the last second without the user being aware > of the switch. (Tynt uses this technique to slightly less malicious ends.) If > pasting in a terminal immediately begins execution, there's no opportunity to > review pasted code. With bracketed paste support, the shell requires > additional user interaction after a paste to begin execution, making this > attack much less effective. Requiring the extra RET after pasting shouldn't be too onerous. I found this to be a good summary: http://cirw.in/blog/bracketed-paste Thanks for the extra info! Pádraig.
Re: [PATCH] bracketed paste support
On 10/27/2014 10:35 PM, Daniel Colascione wrote: > +@item enable-bracketed-paste > +@vindex enable-bracketed-paste > +If set to @samp{on} and the terminal supports bracketed paste mode, > +configure it to insert each paste into the editing buffer as a string > +instead of treating the characters pasted as normal input, preventing > +inadvertent execution of pasted commands. The default is @samp{on}. I see this defaults on. Does this mean one can't paste command sequences to readline now? thanks, Pádraig.
Re: Testing for Shellshock ... combinatorics and latest(Shellshock) Bash Vulnerability...(attn: Chet Ramey)
On 10/09/2014 08:46 PM, Rick Karcich (rkarcich) wrote: > Hello Chet, > > Re: testing for Shellshock... would like your feedback... specifically, > regarding the possibility of human-directed combinatorial testing to find > this Bash vulnerability... Sounds like how Michal Zalewski found the related CVE-2014-6278 http://lcamtuf.blogspot.ie/2014/10/bash-bug-how-we-finally-cracked.html Pádraig.
Re: umask --help
On 07/30/2014 07:48 PM, Chet Ramey wrote: > On 7/30/14, 2:44 PM, Notes Jonny wrote: >> On 7 Jul 2014 19:47, "Eric Blake" wrote: >>> >>> On 07/07/2014 12:34 PM, Chris Down wrote: Hi Jon, As is standard with other buitins, umask is documented at `help umask`: >>> >>> That said, POSIX allows, and ksh already supports, the use of --help as >>> an option to ALL builtins. It might be nice if bash were to take a leaf >>> from ksh and add generic support for --help to all builtins, instead of >>> requiring users to remember 'help foo' as yet another item in their >>> arsenal alongside 'info foo', 'man foo', and 'foo --help'. >> >> Sounds good. How best to progress this, as a bugzilla ticket? > > You've already requested it as a new feature. I will evaluate it against > the other requests and implement it if it makes the grade. FWIW +1 for this feature both to minimize diffs between shells, and to have one less likely barrier hit for the important group that is first time users. cheers, Pádraig.
Re: UTF-8 printf string formating problem
On 04/06/2014 12:56 PM, Dan Douglas wrote: > On Sunday, April 06, 2014 01:24:58 PM Jan Novak wrote: >> To solve this problem I propose to add a "wide" switch to printf >> or to add "%S" format (similarly to wprintf(3) ) > > ksh93 already has this feature using the "L" modifier: > > ksh -c "printf '%.3Ls\n' $'\u2605\u2605\u2605\u2605\u2605'" > ★★★ > bash -c "printf '%.3Ls\n' $'\u2605\u2605\u2605\u2605\u2605'" > ★ > > Also, zsh does this by default with no special option. I tend to lean towards > going by character anyway because that's what most shell features such as > "read -N" do, and most work directly involving the shell is with text not > binary data. So we can count bytes, chars or cells (graphemes). Thinking a bit more about it, I think shell level printf should be dealing in text of the current encoding and counting cells. In the edge case where you want to deal in bytes one can do: LC_ALL=C printf ... I see that ksh behaves as I would expect and counts cells, though requires the explicit %L enabler: $ ksh -c "printf '%.3Ls\n' $'a\u0301\u2605\u2605\u2605'" á★★ $ ksh -c "printf '%.3Ls\n' $'A\u2605\u2605\u2605'" A★ $ ksh -c "printf '%.3Ls\n' $'AA\u2605\u2605\u2605'" A zsh seems to just count characters: $ zsh -c "printf '%.3Ls\n' $'a\u0301\u2605\u2605\u2605'" á★ $ zsh -c "printf '%.3s\n' $'a\u0301\u2605\u2605\u2605'" á★ $ zsh -c "printf '%.3Ls\n' $'A\u2605\u2605\u2605'" A★★ GNU awk seems to just count characters: $ awk 'BEGIN{printf "%.3s\n", "A★★★"}' A★★ I see that dash gives invalid directive for any of %ls %Ls %S. Pity there is no consensus here. Personally I would go for: printf '%3s' 'blah' # count cells printf '%3Ls' 'blah' # count chars LANG=C printf '%3Ls' 'blah' # count bytes Pádraig.
Re: improve performance of a script
On 03/25/2014 02:12 PM, xeon Mailinglist wrote:
> For each file inside the directory $output, I do a cat to the file and
> generate a sha256 hash. This script takes 9 minutes to read 105 files, with
> the total data of 556MB and generate the digests. Is there a way to make this
> script faster? Maybe generate digests in parallel?
>
> for path in $output
> do
>   # sha256sum
>   digests[$count]=$( $HADOOP_HOME/bin/hdfs dfs -cat "$path" | sha256sum |
>                      awk '{ print $1 }')
>   (( count ++ ))
> done

This is not a bash question, so please ask in a more appropriate user-oriented rather than developer-oriented list in future. Off the top of my head I'd do something like the following to get xargs to parallelize:

digests=( $( find "$output" -type f | xargs -I '{}' -n1 -P$(nproc) \
             sh -c "$HADOOP_HOME/bin/hdfs dfs -cat '{}' | sha256sum" | cut -f1 -d' ' ) )

You might want to distribute that load across systems too with something like dxargs or perhaps something like hadoop :p

thanks, Pádraig.
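The xargs pattern above can be sketched against plain local files (the HDFS invocation from the original script is replaced by direct file reads, so this only illustrates the parallelization, not the original environment):

```shell
# Hash every file under a directory in parallel, one sha256sum process
# per file, keeping only the digest column.
tmpd=$(mktemp -d)
printf 'one' > "$tmpd/a"
printf 'two' > "$tmpd/b"
find "$tmpd" -type f -print0 |
  xargs -0 -n1 -P"$(nproc)" sha256sum |
  cut -d' ' -f1 > "$tmpd/digests"
wc -l < "$tmpd/digests"
rm -rf "$tmpd"
```

With -P$(nproc) the hashing keeps all cores busy; the order of the digests then depends on completion order, so pair each digest with its filename if order matters.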
Re: printf cannot write more than one line to /proc/pid/uid_map
On 03/25/2014 01:57 PM, Greg Wooledge wrote:
> On Tue, Mar 25, 2014 at 08:24:13PM +0900, Kusanagi Kouichi wrote:
>> Description:
>> Bash's builtin printf cannot write more than one line to
>> /proc/pid/uid_map because printf writes one line at a time
>> and uid_map can be written only once.
>
> Sounds like Bash is using the standard I/O library routines, in line
> buffering mode (i.e. setvbuf(..., _IOLBF, ...); ). It's not clear
> to me whether this can be considered a bug, as line-buffered output
> is common, and situations where it fails are rare.
>
>> Repeat-By:
>> # printf '0 0 1\n1 1 1' > /proc/31861/uid_map
>> printf: write error: Operation not permitted
>
> As a workaround, you might consider something like:
>
> printf '0 0 1\n1 1 1' | dd bs=1024 > /proc/31861/uid_map

Note dd will immediately write any short reads in that case too. Though not a specific issue here (since the printf will issue a single write()), to generally get dd to coalesce reads you need to specify obs separately like:

printf '0 0 1\n1 1 1' | dd obs=1024 > /proc/31861/uid_map

BTW it's a bit surprising that bash doesn't use the standard "default buffering modes" mentioned here: http://www.pixelbeat.org/programming/stdio_buffering/ If you want to use the external printf to achieve more standard buffering modes, you can use `env` like:

env printf '0 0 1\n1 1 1' > /proc/31861/uid_map

thanks, Pádraig.
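A quick way to check that the dd rewrite preserves the payload byte-for-byte (this doesn't demonstrate the syscall coalescing itself, which would need strace; it is just a sketch of the pipeline shape, writing to stdout rather than uid_map):

```shell
# The data emerging from dd must be identical to what printf produced,
# having been coalesced into obs-sized writes on the way through.
expected=$(printf '0 0 1\n1 1 1')
actual=$(printf '0 0 1\n1 1 1' | dd obs=1024 2>/dev/null)
[ "$actual" = "$expected" ] && echo intact
```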
Re: Builtins should canonicalize path arguments
On 01/09/2014 07:19 PM, Chet Ramey wrote: > On 1/9/14 12:42 PM, Ondrej Oprala wrote: >> Hi, I investigated this bug report: >> https://bugzilla.redhat.com/show_bug.cgi?id=987975 >> and found out that some of bash's builtins (source at the very least) do >> not canonicalize >> pathnames given as arguments (builtin "open" is instead fed with the path - >> failing in the BZ case). >> The builtin "cd" seems to handle relative paths correctly. I think it would >> be reasonable to take part of >> cd's canonicalization code and use it in other builtins as well. I'd gladly >> take care of the patch. >> Would upstream consider this a good approach? > > I have reservations. If the user in question wants consistent behavior, > I suggest he use `set -o physical' for a while and see if it does what > he wants. The solution might be that simple. See also the coreutils realpath command which might be useful in the general case: http://www.gnu.org/software/coreutils/manual/html_node/realpath-invocation.html thanks, Pádraig.
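A sketch of what such canonicalization looks like with the coreutils tool (assuming realpath is installed; the directory names are invented for illustration):

```shell
# Resolve a symlinked path to its physical location, as `cd -P` (or a
# canonicalizing `source`) would see it.
tmpd=$(realpath "$(mktemp -d)")   # canonicalize first, in case TMPDIR itself is a symlink
mkdir -p "$tmpd/real"
ln -s real "$tmpd/alias"
resolved=$(realpath "$tmpd/alias")
[ "$resolved" = "$tmpd/real" ] && echo canonical
rm -rf "$tmpd"
```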
Re: locale specific ordering in EN_US -- why is a
On 5/21/12 3:42 PM, Chet Ramey wrote: > On 5/21/12 3:27 PM, Aharon Robbins wrote: > > This is why I started the Campaign For Rational Range Interpretation, > > now part of gawk and I believe in the most recent grep also, which > > returns us to the sane days of yesteryear, where [a-z] got only lowercase > > letters and [A-Z] got only uppercase ones. > > The next version of bash will have a shell option to enable this behavior. > It's in the development snapshots if anyone wants to try it out now. So what about setting globasciiranges on by default? There is a backwards compat issue, but it's probably less of a problem than having inconsistent handling of these ranges between different systems, and different tools like grep, gawk, sed, etc... To me it seems best to at least be consistent with each other, and with the intent of POSIX >= 2001. thanks, Pádraig.
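The difference the option makes can be sketched like this (assumes a bash new enough to have the 4.3-era globasciiranges shopt):

```shell
# With rational range interpretation enabled, a lowercase 'b' falls
# outside [A-Z] no matter what the locale's collation order says.
bash -O globasciiranges -c \
  'case b in ([A-Z]) echo matched;; (*) echo unmatched;; esac'
```

Without the option, in many non-C locales the collation order interleaves cases (aAbB...zZ), so the same pattern would match.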
Re: SIGTERM ignored before exec race
On 03/26/2013 03:23 PM, Chet Ramey wrote:
> On 3/25/13 6:45 PM, Pádraig Brady wrote:
>
>> OK thanks for the pointer.
>> So the race is narrowed rather than closed?
>> As we have:
>>
>> execute_disk_command()
>> {
>>   int pid = fork();
>>   if (pid == 0) /* child */
>>   {
>>     CHECK_SIGTERM; /* Honor received SIGTERM. */
>>     do stuff;
>>     CHECK_SIGTERM; /* Honor received SIGTERM. */
>>     /* --->SIGTERM still ignored if received here?<--- */
>>     exec(...);
>> }
>
> Please don't omit the code immediately following the fork that restores
> the signal handlers. The execute_disk_command() code actually looks like
> this:
>
> pid = make_child (blah...);
> if (pid == 0)
>   {
>     reset_terminating_signals ();  /* XXX */
>     /* Cancel traps, in trap.c. */
>     restore_original_signals ();
>
>     CHECK_SIGTERM;
>
>     ...
>
>     exec (blah,...);

Ah that looks good thanks. The two CHECK_SIGTERM calls in that clause confused me (I'm still not sure both calls are required). In any case I can't see any races now :)

> There is code in make_child that resets the sigterm status (RESET_SIGTERM),
> since fork() is supposed to clear the set of pending signals anyway.

right.

> Please see if you can reproduce it with the current devel branch code.

I can't. thanks!

Pádraig.
Re: SIGTERM ignored before exec race
On 03/25/2013 02:55 PM, Chet Ramey wrote:
> On 3/25/13 10:34 AM, Pádraig Brady wrote:
>
>> I've confirmed that bash 4.3 alpha doesn't have the issue.
>> Well I can't reproduce easily at least.
>> I didn't notice a NEWS item corresponding to it though.
>
> It's not a new feature. There are several items in CHANGES that refer to
> reworked signal handling.
>
>> If I wanted to inspect this code change what would be the best approach?
>
> Look for RESET_SIGTERM and CHECK_SIGTERM in the source code (jobs.c and
> execute_cmd.c, mostly) and trace the code back through quit.h to sig.c.

OK thanks for the pointer. So the race is narrowed rather than closed? As we have:

execute_disk_command()
{
  int pid = fork();
  if (pid == 0) /* child */
  {
    CHECK_SIGTERM; /* Honor received SIGTERM. */
    do stuff;
    CHECK_SIGTERM; /* Honor received SIGTERM. */
    /* --->SIGTERM still ignored if received here?<--- */
    exec(...);
  }

thanks, Pádraig.
Re: SIGTERM ignored before exec race
On 02/18/2013 02:39 AM, Chet Ramey wrote: > On 2/17/13 7:46 PM, Pádraig Brady wrote: > >>>> I notice the following will wait for 5 seconds for >>>> the timeout process to end with SIGALRM, rather than >>>> immediately due to kill sending the SIGTERM. >>> >>> I think the way to approach this is to change the SIGTERM handling from >>> straight SIG_IGN to a handler installed with SA_RESTART that catches the >>> signal but does nothing with it. >>> >>> That will allow the shell to note whether it receives a SIGTERM between >>> fork and exec and react accordingly. >> >> Thanks for continuing to look at this. >> Just in case you need to consider other options, >> elaborating a bit on my previous suggestion: > > I looked at this, and it ended up being a little more complicated and a > little less flexible than the approach I adopted. > >> Your suggested method I think is to have a handler something like >> the following which should work too, but perhaps with the caveat >> that the exit status of the child before the exec might not have >> the signal bit set. > > No, much simpler. The signal handler just sets a flag. It's only > installed by interactive shells, so the flag never changes in any other > shell. Setting the flag to 0 at appropriate times and checking for > non-zero values at appropriate times is all that's needed. If a child > process finds the flag non-zero, it calls the usual terminating signal > handler, which ends up killing the shell with the same signal. I've confirmed that bash 4.3 alpha doesn't have the issue. Well I can't reproduce easily at least. I didn't notice a NEWS item corresponding to it though. If I wanted to inspect this code change what would be the best approach? thanks, Pádraig.
Re: Bash git repository on savannah
On 11/28/11 8:34 AM, Chet Ramey wrote: > On 11/28/11 4:48 AM, Roman Rakus wrote: > > On 11/28/2011 06:28 AM, Mike Frysinger wrote: > >>> I don't think I'll push every change to git as soon as it happens, but > >>> > I'm thinking about fairly frequent commits to a `bash-devel' sort of > >>> > tree. The question is whether or not enough people would be interested > >>> > in that to make the frequency worth it. > >> i would;) > >> -mike > > me too > > OK, that's three. :-) I'd also appreciate each change pushed to the devel git branch. Having 20 years of history in the coreutils git repo has been invaluable for ongoing development. thanks, Pádraig.
Re: SIGTERM ignored before exec race
On 02/17/2013 10:00 PM, Chet Ramey wrote: On 2/9/13 12:02 AM, Pádraig Brady wrote:

$ rpm -q kernel glibc bash
kernel-2.6.40.4-5.fc15.x86_64
glibc-2.14.1-6.x86_64
bash-4.2.10-4.fc15.x86_64

I notice the following will wait for 5 seconds for the timeout process to end with SIGALRM, rather than immediately due to kill sending the SIGTERM.

I think the way to approach this is to change the SIGTERM handling from straight SIG_IGN to a handler installed with SA_RESTART that catches the signal but does nothing with it. That will allow the shell to note whether it receives a SIGTERM between fork and exec and react accordingly.

Thanks for continuing to look at this. Just in case you need to consider other options, elaborating a bit on my previous suggestion:

sigprocmask(sigterm_block);   // ensure parent shell doesn't get TERM
signal(SIGTERM, SIG_DFL);     // reset to default for child to inherit
fork();
if (child)
{
  sigprocmask(sigterm_unblock); // reset
  /* From now any (pending) SIGTERM will cause the child process to
     exit with the signal flag set in its exit status. */
  exec();
}
else
{
  signal(SIGTERM, SIG_IGN);     // continue to ignore TERM
  sigprocmask(sigterm_unblock); // reset
}

Your suggested method I think is to have a handler something like the following, which should work too, but perhaps with the caveat that the exit status of the child before the exec might not have the signal bit set.

sigterm_handler(int sig)
{
  if (getpid() != interactive_pid)
    exit(...);
}

thanks, Pádraig.
Re: SIGTERM ignored before exec race
On 02/10/2013 08:30 PM, Chet Ramey wrote: On 2/9/13 12:02 AM, Pádraig Brady wrote:

$ rpm -q kernel glibc bash
kernel-2.6.40.4-5.fc15.x86_64
glibc-2.14.1-6.x86_64
bash-4.2.10-4.fc15.x86_64

I notice the following will wait for 5 seconds for the timeout process to end with SIGALRM, rather than immediately due to kill sending the SIGTERM.

I'll take a look at making the race window smaller; there is probably some code reordering that will have a beneficial effect. This race exists, to a certain extent, in all Bourne-like shells. This problem only happens when run interactively, and it happens because interactive shells ignore SIGTERM. No matter how quickly you modify a child's signal handlers after fork() returns, there's always the chance that a kernel's scheduling policies or some global auto-nice of child or background processes will cause it to happen.

You might be able to do something like:

sigprocmask(sigterm_block);   // ensure parent shell doesn't get TERM
signal(SIGTERM, SIG_DFL);     // reset to default for child to inherit
fork();
signal(SIGTERM, SIG_IGN);     // continue to ignore TERM
sigprocmask(sigterm_unblock); // reset

cheers, Pádraig.
SIGTERM ignored before exec race
$ rpm -q kernel glibc bash
kernel-2.6.40.4-5.fc15.x86_64
glibc-2.14.1-6.x86_64
bash-4.2.10-4.fc15.x86_64

I notice the following will wait for 5 seconds for the timeout process to end with SIGALRM, rather than immediately due to kill sending the SIGTERM.

$ timeout 5 sleep 10& pid=$!; echo $pid >&2; kill $pid; wait
[1] 4895
4895
[1]+  Exit 124    timeout 5 sleep 10

If you put a small sleep in, the race is avoided, and SIGTERM is sent to the timeout process.

$ timeout 5 sleep 10& pid=$!; echo $pid >&2; sleep .1; kill $pid; wait
[1] 4935
4935
[1]+  Exit 143    timeout 5 sleep 10

I tried dash and ksh and they don't exhibit the behavior where SIGTERM is ignored. You can see here that the shell is terminating before timeout is execd:

dash$ timeout 5 sleep 10& pid=$!; echo $pid >&2; kill $pid; wait
5088
[1] + Terminated    timeout 5 sleep 10

thanks, Pádraig.
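For contrast, a non-interactive sketch (not from the original report): non-interactive shells do not ignore SIGTERM, so here the TERM is delivered and wait reports 143; the interactive race above cannot be reproduced from a script.

```shell
# Non-interactive shells leave SIGTERM at its default disposition, so
# the TERM is honored even if it arrives between fork() and exec().
timeout 5 sleep 10 & pid=$!
kill "$pid"
st=0
wait "$pid" || st=$?
echo "$st"
```

This prints 143 (128 + SIGTERM), never the Exit 124 SIGALRM status seen in the interactive transcript.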
Re: Command line arguments depend on locale
On 01/30/2013 11:05 PM, Benny Amorsen wrote: Apparently ping has now started interpreting its command line arguments depending on locale. I.e. ping -i 0.1 no longer works in locales where comma is the decimal separator. This makes it difficult to call system commands. The only workaround is to set LC_ALL to a known-good locale, but then your users get no benefit from the translations of error messages and so on. Is Linux really doomed to repeat the mistakes made by Visual Basic more than a decade ago?

Yes this is a slippery slope. ping is definitely wrong to do this IMHO. A less clear-cut example is printf(1). POSIX states that LC_NUMERIC controls the format of the numbers _written_. In coreutils we're careful to reset to the C locale so that strtod etc. work consistently. Just testing bash here shows it has a different interpretation, which I would deem not what POSIX intended:

# coreutils
$ LC_NUMERIC=de_DE env printf "%f\n" 0.1
0,100000
$ LC_NUMERIC=de_DE env printf "%f\n" 0,1
printf: 0,1: value not completely converted
0,000000

# bash
$ LC_NUMERIC=de_DE printf "%f\n" 0,1
0,100000
$ LC_NUMERIC=de_DE printf "%f\n" 0.1
bash: printf: 0.1: invalid number
0,000000

cheers, Pádraig
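The coreutils convention described above, always parsing in the C locale regardless of the caller's LC_NUMERIC, can be sketched in a script too (parse_float is an invented helper name):

```shell
# Force C-locale numeric parsing so '0.1' is always accepted, whatever
# locale the surrounding environment uses; env picks the external printf.
parse_float() { LC_ALL=C env printf '%.2f' "$1"; }
parse_float 0.1
echo
```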
-INT_MIN/-1 => signed overflow exception
Happens on x86_64 with 4.2.10(1) and 4.2.42(2) at least. The following (done in a subshell to avoid killing the current shell) demonstrates it:

$ ($((-2**63/-1)))
Floating point exception (core dumped)

thanks, Pádraig.
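The crash comes from INT64_MIN / -1 being the one 64-bit division whose result doesn't fit, so a script can guard it explicitly before letting shell arithmetic run (a sketch; safe_div is an invented helper name, assuming 64-bit shell integers):

```shell
# Special-case the single overflowing operand pair; every other int64
# division is well defined.
safe_div() {
  if [ "$1" = "-9223372036854775808" ] && [ "$2" = "-1" ]; then
    echo overflow
  else
    echo $(( $1 / $2 ))
  fi
}
safe_div -8 2
safe_div -9223372036854775808 -1
```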
Re: bug#6377: Subject: inaccurate character class processing
tags 6377 + notabug

On 08/06/10 14:48, Iosif Fettich wrote:
> (I'm not sure if this a bash or a coreutils issue).
>
> ls [A-Z]*
>
> doesn't work as expected/documented.

The logic is in bash but it's not an issue. It's using the collating sequence of your locale:

$ touch a A b B z Z
$ echo [A-Z]*
A b B z Z
$ export LANG=C
$ echo [A-Z]*
A B Z
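For scripts that really mean "uppercase letters", the named character class sidesteps collation order entirely (a general note, not part of the original report; classify is an invented helper name):

```shell
# [[:upper:]] matches by character classification in the current locale
# rather than by collation order, so lowercase names never match it.
classify() { case $1 in ([[:upper:]]) echo upper;; (*) echo other;; esac; }
classify B
classify b
```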
Re: [Fwd: [PATCH] arithmetic -> logical shift]
Chet Ramey wrote: > Pádraig Brady wrote: >> Original Message >> Date: Tue, 07 Oct 2008 11:55:51 +0100 >> From: Pádraig Brady <[EMAIL PROTECTED]> >> To: Chet Ramey <[EMAIL PROTECTED]> >> CC: [EMAIL PROTECTED] >> >> I was just discussing bit shifting with Tim Hockin using shell >> arithmetic expansion, and he pointed out that bash and ksh >> use arithmetic rather than logical shift for the >> operator. > > Actually, bash and ksh use whatever the native C compiler implements, > since both just translate the >> and << into the same operators > internally. > > I don't see a really compelling reason to change, since, as you say, > the standard requires signed long ints. Well that means the result is compiler dependent. So scripts could give different results on another platform, or, less often, even within a platform. Also arithmetic right shift is not useful. Pádraig.
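Until shells change, a logical right shift can be emulated in arithmetic expansion itself — a sketch assuming 64-bit shell integers and a shift count between 1 and 63 (lsr is an invented helper name):

```shell
# Mask off the sign bits that an arithmetic right shift replicates into
# the high end, yielding a logical shift even for negative operands.
lsr() { echo $(( ($1 >> $2) & ~(-1 << (64 - $2)) )); }
lsr 16 2   # positive values are unaffected
lsr -1 1   # all-ones shifted once: 2^63 - 1
```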
[Fwd: [PATCH] arithmetic -> logical shift]
Original Message
Date: Tue, 07 Oct 2008 11:55:51 +0100
From: Pádraig Brady <[EMAIL PROTECTED]>
To: Chet Ramey <[EMAIL PROTECTED]>
CC: [EMAIL PROTECTED]

I was just discussing bit shifting with Tim Hockin using shell arithmetic expansion, and he pointed out that bash and ksh use arithmetic rather than logical shift for the >> operator. Now arithmetic shift is not useful on 2's complement machines, and moreover it's compiler dependent as to whether arithmetic or logical shift is done for >>. Therefore to increase usefulness and decrease ambiguity I suggest applying something like the attached simple patch. I know the opengroup spec says to use signed ints, but I think that is intended to disambiguate input and output, rather than defining internal operations.

Some sample output from the patched version:

$ printf "%x\n" $((0x8000000000000000>>1))
4000000000000000
$ smax=$((-1>>1)); echo $smax
9223372036854775807
$ echo $((-0x4000000000000000/2)) $((-0x4000000000000000>>1))
-2305843009213693952 6917529027641081856

And corresponding output from unpatched bash:

$ printf "%x\n" $((0x8000000000000000>>1))
c000000000000000
$ smax=$((-1>>1)); echo $smax
-1
$ echo $((-0x4000000000000000/2)) $((-0x4000000000000000>>1))
-2305843009213693952 -2305843009213693952

cheers, Pádraig.

--- expr.arithmetic_shift.c	2008-10-06 07:35:09.000000000 +0000
+++ expr.c	2008-10-06 07:11:44.000000000 +0000
@@ -452,7 +452,7 @@
 	  lvalue <<= value;
 	  break;
 	case RSH:
-	  lvalue >>= value;
+	  lvalue = ((uintmax_t)lvalue) >> value;
 	  break;
 	case BAND:
 	  lvalue &= value;
@@ -703,7 +703,7 @@
       if (op == LSH)
 	val1 = val1 << val2;
       else
-	val1 = val1 >> val2;
+	val1 = ((uintmax_t)val1) >> val2;
     }
 
   return (val1);