2021-06-28 17:28:04 +0700, Robert Elz via austin-group-l at The Open Group:
[...]
> A more blatant case would be
>
>     pwd | (exit 0)
[...]
On the zsh mailing list, where the issue was initially brought up by
Vincent, it was also mentioned that the EBADF error was explicitly
ignored as a special case: some systems have (or used to have) a
non-usable /dev/null, so users would use >&- instead to silence
output. If stdout is closed, one could argue the user explicitly
wants the output to be discarded.

The main question here is whether implementations are at liberty to
decide what constitutes an error or not. Utility implementations
don't have to use the POSIX C API, so the spec can't say a utility
should report an error if write(2), fwrite(3), fclose(3)... fails.
It's also expected for some syscalls to fail in normal success
conditions (like close(), isatty()-triggered ones, execve() during a
$PATH lookup...).

In the case of pwd, builtin or not, pwd's one and only job is to
print the working directory, so it's clearer that failing to print it
should be considered an error. But what about:

    if grep string somefile >&-; then...

(same with awk...)?

Should failure to print prompts, warnings or verbose information on
stderr be considered an error as well? What about utilities
processing more than one file (cat, sed, cp...)? Should the
processing stop at the first error, or should we carry on with the
next file?

Are shells (and their builtins) allowed to buffer their output when
it doesn't go to a terminal (and defer it until the next time they
execute something, like awk does)?

-- 
Stephane
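The pwd case can be observed directly. A minimal sketch, assuming a
POSIX-ish shell; the status with stdout open should be 0, but what the
closed-stdout case reports is exactly the open question here, so no
particular value is assumed for it (bash's builtin reports 1, other
shells may differ):

```shell
# Exit status of pwd with stdout open: expected to be 0.
pwd > /dev/null
echo "open: $?"

# Exit status of pwd with stdout explicitly closed (>&-).
# Whether this counts as an error is shell-dependent; the diagnostic,
# if any, is discarded so only the status is shown.
{ pwd >&-; } 2>/dev/null
echo "closed: $?"
```

Running this under several sh implementations (dash, bash, ksh, zsh)
is an easy way to see how little agreement there currently is.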
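On the multi-file question, common implementations of cat illustrate
the "carry on" behaviour: a sketch (file names here are made up for
the demonstration; the non-zero status value is implementation-defined,
so none is assumed):

```shell
# Create one readable operand; the other does not exist.
printf 'hello\n' > /tmp/gr_exists.$$

# cat reports the unopenable operand (diagnostic discarded here),
# still processes the remaining one, and exits non-zero overall.
cat /nonexistent.$$ /tmp/gr_exists.$$ 2>/dev/null
echo "status: $?"

rm -f /tmp/gr_exists.$$
```

So the per-operand failure is not fatal to the whole invocation, but
it is reflected in the final exit status, which is one possible answer
to "stop at the first error, or carry on".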