Greetings, you almighty problem-solvers!
I'll illustrate my problem with a couple of simple examples.
Let's set aside for the moment the reason for doing something like this:
fun1 () {
    local tty
    if   [ -t 1 ]; then tty=1
    elif [ -t 2 ]; then tty=2
    elif [ -t 0 ]; then tty=0
    else return 1
    fi
    local err chars
    { echo "some message"
      read -t 0 chars
      err=$?
      echo "err: $err chars: $chars" >fun1.log
    } <&$tty >&$tty
}
This works fine, regardless of how we redirect stdin, stdout, and stderr when
invoking fun1.
Now try: "fun1 & wait". The child process returns immediately, and all is good.
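For context, my understanding of that zero-timeout read, from the manual: it is a pure poll that consumes nothing and just reports, via its exit status, whether input is already waiting (shown here against a pipe rather than a tty):

```shell
# Zero-timeout poll: read -t 0 reads no characters; it exits 0 when
# input is already available on the descriptor, nonzero otherwise.
if printf 'x' | { sleep 0.2; read -t 0; }; then
    echo "input available"       # prints "input available"
fi
```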
Now let's write another function, with a little different read command:
fun2 () {
    local tty
    if   [ -t 1 ]; then tty=1
    elif [ -t 2 ]; then tty=2
    elif [ -t 0 ]; then tty=0
    else return 1
    fi
    local err chars
    { echo "some message"
      IFS= read -r -d '' -t 1 chars
      err=$?
      echo "err: $err chars: $chars" >fun2.log
    } <&$tty >&$tty
}
Try: "fun2 & wait". Again, the child process returns immediately, but now
we're left with an extra bash process in our process list, stuck waiting for a
terminal. Well, if there's no terminal, then there's no terminal, period. It's
not like a terminal will magically appear for this process to get unstuck.
Shouldn't the read command in this case just return with some *meaningful*
exit status instead?
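For comparison, when the timeout in fun2's read actually gets a chance to fire (in the foreground, and here against a plain pipe rather than a tty), read does report it cleanly, with an exit status greater than 128:

```shell
# Foreground timeout case: sleep holds the pipe open with no data,
# so read's 1-second timeout expires and it fails with status > 128.
sleep 2 | { IFS= read -r -d '' -t 1 chars; echo "err: $?"; }
```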
Or better yet, shouldn't the "read -t 0" command in fun1() detect that
condition?
Or at least, shouldn't the "[ -t N ]" tests kick us out of the function to
begin with?
Without one, or the other, or the third, I can't dig myself out of this
predicament. Is there any other test I could do before the read command, so I
don't get stuck like that? (Preferably without shelling out to external
commands.)
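For what it's worth, the only pre-check I've come up with so far compares our own process group against the terminal's foreground process group (the tpgid column), since a background job reading from its tty is exactly what earns us the SIGTTIN stop; but this shells out to ps, which is what I'd like to avoid:

```shell
# Hypothetical helper: succeed only if we are in the terminal's
# foreground process group. A background job is not, so a blocking
# read from the tty would stop it with SIGTTIN.
in_foreground () {
    local fg_pgid my_pgid
    fg_pgid=$(ps -o tpgid= -p $$) || return 1   # terminal's fg pgid
    my_pgid=$(ps -o pgid= -p $$)  || return 1   # our own pgid
    [ "${fg_pgid// /}" = "${my_pgid// /}" ]
}
```

fun2 could then start with "in_foreground || return 1" before ever touching the tty.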
BTW, modifying the read commands in the examples to: "read -u $tty ..." makes
no difference in the outcomes.
--
Pourko