2021-07-01 11:45:40 +0100, Geoff Clare via austin-group-l at The Open Group:
[...]
> The standard says nothing about internal buffering; it just requires
> pwd to write the directory to file descriptor 1.  It also states that
> exit status 0 means "successful completion".
[...]
> If an implementor chooses to buffer the output, then it is their
> responsibility to check that the buffer is successfully flushed to
> fd 1 before exiting with status 0.
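That requirement is easy to check on a given shell. A sketch (Linux-specific: it assumes a /dev/full device, which fails every write with ENOSPC):

```shell
# Every write to /dev/full fails with ENOSPC, so a pwd that checks its
# write (or its buffer flush) to fd 1 must exit with a non-zero status.
pwd > /dev/full 2>/dev/null
status=$?
echo "pwd exited with status $status"
```

With bash's pwd builtin (and GNU pwd) this reports a non-zero status; a shell that buffered the output and never checked the flush would wrongly report 0.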
That seems to answer my question about whether shells are allowed to
buffer / defer the output of their builtins (past their exit). Which
would be a no.

[...]
> The GNU implementations (including bash builtins) of the POSIX utilities
> do it right.  Of course, I don't know whether they were already
> well-behaved in this regard before they were updated to conform to
> POSIX.2-1992.
[...]

Note that GNU rm returns failure upon:

  touch file && rm -fv file > /dev/full

Which seems to contradict:

  The following exit values shall be returned:

   0  Each directory entry was successfully removed, unless its
      removal was canceled by a non-affirmative response to a prompt
      for confirmation.
  >0  An error occurred.

(the wording suggests that a failure to write to stdout, or any
"error" that occurs after the last file was successfully removed, is
not to be considered an error).

The ">0" case should probably be moved to the top, or the text should
clarify that errors (whatever that means; again, is it at the
implementor's discretion to decide what constitutes an error?) take
precedence here.

The same goes for many utilities. See expr for instance.

Here we've been focusing on pwd, where there's less scope for
argument, as the "p" in pwd is for "print". It's less obvious for:

  if ! rm -vf -- "$file"; then
    ...
  fi

Or:

  if expr 1 '<' 2; then ...

where one could argue that ignoring write errors actually adds
robustness, and that not ignoring them can be dangerous (for instance,
it allows one to change a script's logic by redirecting its stdout to
/dev/full or by setting a low file size limit).

-- 
Stephane
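To make the "change a script's logic" danger concrete, here is a sketch of the branch flip (Linux-specific: it assumes /dev/full and GNU expr, which checks the stdout flush at exit):

```shell
# expr 1 '<' 2 is true: it prints "1" and normally exits 0.  But with
# stdout redirected to /dev/full the write of that "1" fails, GNU expr
# exits non-zero, and the if statement takes the other branch.
if expr 1 '<' 2 > /dev/full 2>/dev/null; then
  branch=then
else
  branch=else
fi
echo "took the $branch branch"
```

So a caller who can redirect the script's stdout (or impose a tiny file size limit) can steer which branch runs, which is the robustness argument for ignoring write errors in this position.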