echo test >& "quote'test"

2024-04-08 Thread squeaky
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -O2 -flto=auto -ffat-lto-objects -fexceptions -g
-grecord-gcc-switches -pipe -Wall -Werror=format-security
-Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection
uname output: Linux xps 6.5.12-100.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Nov 20 22:28:44 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-redhat-linux-gnu

Bash Version: 5.2
Patch Level: 21
Release Status: release

Description:

Running
echo test >& "quote'test"
should create the file "quote'test", but it creates "quotetest" instead.

Repeat-By: echo test >& "quote'test"




Re: Examples of concurrent coproc usage?

2024-04-08 Thread Zachary Santer
On Mon, Apr 8, 2024 at 11:07 AM Chet Ramey  wrote:
>
> Bash doesn't close the file descriptor in $fd. Since it's used with `exec',
> it's under the user's control.
>
> The script here explicitly opens and closes the file descriptor, so it
> can read until read returns failure. It doesn't really matter when the
> process exits or whether the shell closes its ends of the pipe -- the
> script has made a copy that it can use for its own purposes.

> (And you need to use exec to close it when you're done.)

Caught that shortly after sending the email. Yeah, I know.

> You can do the same thing with a coproc. The question is whether or
> not scripts should have to.

If there's a way to exec fds to read from and write to the same
background process without first using the coproc keyword or using
FIFOs, I'm all ears. To me, coproc fills that gap. I'd be fine with
having to close the coproc fds in subshells myself. Heck, you still
have to use exec to close at least the writing coproc fd in the parent
process to get the coproc to exit, regardless.
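
For example, a minimal sketch of that pattern (illustrative only; the
while-read loop in the coproc avoids the stdio output-buffering hang
you'd get from something like tr):

coproc UPPER { while IFS='' read -r line; do printf '%s\n' "${line^^}"; done; }
exec {to}>&"${UPPER[1]}" {from}<&"${UPPER[0]}"  # my own copies of the fds
printf '%s\n' 'hello' >&"${to}"
IFS='' read -r reply <&"${from}"  # reply=HELLO
exec {to}>&-   # the coproc sees EOF on its stdin and exits
exec {from}<&-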

The fact that the current implementation allows the coproc fds to get
into process substitutions is a little weird to me. A process
substitution, in combination with exec, is kind of the one other way
to communicate with background processes through fds without using
FIFOs. I still have to close the coproc fds there myself, right now.
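
That is (command names illustrative):

exec {rfd}< <( producer-command )  # read from a background process
exec {wfd}> >( consumer-command )  # write to a background process
# ... and later, close them myself:
exec {rfd}<&- {wfd}>&-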

Consider the following situation: I've got different kinds of
background processes going on, with fds exec'd from process
substitutions, fds from coprocs, and fds exec'd from other things, and
I need to keep them all out of the various background processes. Now I
need separate arrays of fds, so that I can close all the fds that get
into a background process forked with & without trying to close the
coproc fds there, while still being able to close all the fds,
including the coproc fds, in process substitutions.

I'm curious what the reasoning was there.



Re: Potential Bash Script Vulnerability

2024-04-08 Thread Oğuz
On Tue, Apr 9, 2024 at 5:17 AM Robert Elz  wrote:
> Sure, it is possible to make a useless program like this ...

> Almost all real commands use stdio to read stdin.   Playing about
> any more with this absurd example isn't worth the bother.   The relevant
> text should simply be deleted from POSIX.   It is bizarre and unnecessary.

Using stdio to read stdin doesn't stop you from detecting whether it is
connected to a file and adjusting the file offset before exit. In
fact, POSIX mandates this for standard utilities in XCU 1.4, Utility
Description Defaults, INPUT FILES section:

When a standard utility reads a seekable input file and terminates
without an error before it reaches end-of-file, the utility shall
ensure that the file offset in the open file description is properly
positioned just past the last byte processed by the utility. For files
that are not seekable, the state of the file offset in the open file
description for that file is unspecified. A conforming application
shall not assume that the following three commands are equivalent:

tail -n +2 file
(sed -n 1q; cat) < file
cat file | (sed -n 1q; cat)

The second command is equivalent to the first only when the file is
seekable. The third command leaves the file offset in the open file
description in an unspecified state. Other utilities, such as head,
read, and sh, have similar properties.


If the shell doesn't cooperate with other utilities in this regard, the following can happen:

$ cat tst.sh
echo hi
sed -n 1q
# Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla gravida odio
# ultrices, consectetur libero eu, malesuada augue. Proin commodo faucibus ipsum
# et viverra. In dictum, risus eu hendrerit rutrum, lorem diam cursus sapien,
# nec mollis urna nunc non enim. Etiam at porttitor neque. Quisque elementum
# orci in nisi egestas, sit amet pretium est tincidunt. Pellentesque eleifend
# nec tellus eu lobortis. Praesent pharetra sed neque eleifend interdum.
#
# Aenean eget tincidunt sem. Etiam ac ultricies leo. Nunc tortor ante, finibus
# in ullamcorper id, mattis sit amet ipsum. Etiam ac diam sem. Aenean a purus
# ex. Proin tincidunt erat odio, ut suscipit purus commodo nec. Curabitur eget
# ante non mi sagittis congue ac non massa. Cras tincidunt bibendum erat, ut
# gravida arcu congue eu. Phasellus ex quam, blandit at interdum at, cursus eu
# nisi. Nullam interdum faucibus massa at luctus. Nullam eu sapien ut mauris
# eleifend pharetra sit amet quis ante. Nullam porttitor enim eros, e
if false; then
rm uh-oh
fi
$
$ busybox sh 

Re: Potential Bash Script Vulnerability

2024-04-08 Thread Robert Elz
Date:Mon, 8 Apr 2024 19:35:02 +0300
From:=?UTF-8?B?T8SfdXo=?= 
Message-ID:  



  | Why not? It works fine with other shells

Sure, it is possible to make a useless program like this ...

  | $ cat tst.sh
  | cat 

Re: Potential Bash Script Vulnerability

2024-04-08 Thread Martin D Kealey
Hmm, looks like I'm partially mistaken.

Vim never does the inode pivot trick *in circumstances where I might've
noticed*, so not when the file:
- has multiple links, or
- is a symlink, or
- is in an unwritable directory, or
- otherwise appears to be something other than a plain file.

But it turns out it does pivot the inode when it thinks it won't be
noticed, which makes sense because it's less risky than overwriting a file
(which could result in data loss if the write fails).
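
(This is easy to see with sed -i; the inode numbers below are made up:

$ echo x > f; ls -i f
1234 f
$ sed -i 's/x/y/' f; ls -i f
5678 f

the replacement file gets a fresh inode.)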

So I've learned something new, thank you.

-Martin

On Tue, 9 Apr 2024 at 11:13, Kerin Millar  wrote:

> On Tue, 9 Apr 2024 10:42:58 +1200
> Martin D Kealey  wrote:
>
> > On Mon, 8 Apr 2024 at 01:49, Kerin Millar  wrote:
> >
> > > the method by which vim amends files is similar to that of sed -i.
> > >
> >
> > I was about to write "nonsense, vim **never** does that for me", but
> then I
> > remembered that using ":w!" instead of ":w" (or ":wq!" instead of ":wq")
> > will write the file as normal, but if that fails, it will attempt to
> remove
> > it and create a new one. Ironically, that's precisely one of the cases
> > where using "sed -i" is a bad idea, but at least with vim you've already
> > tried ":w" and noticed that it failed, and made a considered decision to
> > use ":w!" instead.
> >
> > Except that nowadays many folk always type ":wq!" to exit vim, and never
> > put any thought into this undesirable side effect.
> >
> > I put that in the same bucket as using "kill -9" to terminate daemons, or
> > liberally using "-f" or "--force" in lots of other places. Those are bad
> > habits, since they override useful safety checks, and I recommend making
> a
> > strenuous effort to unlearn such patterns. Then you can use these
> stronger
> > versions only when (1) the soft versions fail, and (2) you understand the
> > collateral damage, and (3) you've thought about it and decided that it's
> > acceptable in the particular circumstances.
> >
> > -Martin
> >
> > PS: I've never understood the preference for ":wq" over "ZZ" (or ":x"); I
> > want to leave the modification time unchanged if I don't edit the file.
>
> Alright. In that case, I don't know why I wasn't able to 'inject' a
> replacement command with it. I'll give it another try and see whether I can
> determine what happened.
>
> --
> Kerin Millar
>


Re: Potential Bash Script Vulnerability

2024-04-08 Thread Kerin Millar
On Tue, 9 Apr 2024 10:42:58 +1200
Martin D Kealey  wrote:

> On Mon, 8 Apr 2024 at 01:49, Kerin Millar  wrote:
> 
> > the method by which vim amends files is similar to that of sed -i.
> >
> 
> I was about to write "nonsense, vim **never** does that for me", but then I
> remembered that using ":w!" instead of ":w" (or ":wq!" instead of ":wq")
> will write the file as normal, but if that fails, it will attempt to remove
> it and create a new one. Ironically, that's precisely one of the cases
> where using "sed -i" is a bad idea, but at least with vim you've already
> tried ":w" and noticed that it failed, and made a considered decision to
> use ":w!" instead.
> 
> Except that nowadays many folk always type ":wq!" to exit vim, and never
> put any thought into this undesirable side effect.
> 
> I put that in the same bucket as using "kill -9" to terminate daemons, or
> liberally using "-f" or "--force" in lots of other places. Those are bad
> habits, since they override useful safety checks, and I recommend making a
> strenuous effort to unlearn such patterns. Then you can use these stronger
> versions only when (1) the soft versions fail, and (2) you understand the
> collateral damage, and (3) you've thought about it and decided that it's
> acceptable in the particular circumstances.
> 
> -Martin
> 
> PS: I've never understood the preference for ":wq" over "ZZ" (or ":x"); I
> want to leave the modification time unchanged if I don't edit the file.

Alright. In that case, I don't know why I wasn't able to 'inject' a replacement 
command with it. I'll give it another try and see whether I can determine what 
happened.

-- 
Kerin Millar



Re: Potential Bash Script Vulnerability

2024-04-08 Thread Martin D Kealey
On Mon, 8 Apr 2024 at 01:49, Kerin Millar  wrote:

> the method by which vim amends files is similar to that of sed -i.
>

I was about to write "nonsense, vim **never** does that for me", but then I
remembered that using ":w!" instead of ":w" (or ":wq!" instead of ":wq")
will write the file as normal, but if that fails, it will attempt to remove
it and create a new one. Ironically, that's precisely one of the cases
where using "sed -i" is a bad idea, but at least with vim you've already
tried ":w" and noticed that it failed, and made a considered decision to
use ":w!" instead.

Except that nowadays many folk always type ":wq!" to exit vim, and never
put any thought into this undesirable side effect.

I put that in the same bucket as using "kill -9" to terminate daemons, or
liberally using "-f" or "--force" in lots of other places. Those are bad
habits, since they override useful safety checks, and I recommend making a
strenuous effort to unlearn such patterns. Then you can use these stronger
versions only when (1) the soft versions fail, and (2) you understand the
collateral damage, and (3) you've thought about it and decided that it's
acceptable in the particular circumstances.

-Martin

PS: I've never understood the preference for ":wq" over "ZZ" (or ":x"); I
want to leave the modification time unchanged if I don't edit the file.


Re: Examples of concurrent coproc usage?

2024-04-08 Thread Chet Ramey

On 4/4/24 7:23 PM, Martin D Kealey wrote:

I'm somewhat uneasy about having coprocs inaccessible to each other.
I can foresee reasonable cases where I'd want a coproc to utilize one or 
more other coprocs.


That's not the intended purpose, so I don't think not fixing a bug to
accommodate some future hypothetical use case is a good idea. That's
why there's a warning message when you try to use more than one coproc --
the shell doesn't keep track of more than one.

If you want two processes to communicate (really three), you might want
to build with the multiple coproc support and use the shell as the
arbiter.
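
(A sketch, assuming a source build with the compile-time MULTIPLE_COPROCS
option enabled:

./configure CFLAGS='-DMULTIPLE_COPROCS=1' && make

which lifts the one-active-coproc bookkeeping limit.)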

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: Potential Bash Script Vulnerability

2024-04-08 Thread Oğuz
On Mon, Apr 8, 2024 at 5:32 PM Robert Elz  wrote:
> The effect is that sharing stdin between the shell script, and other
> commands (than read), is almost certainly never going to work,

Why not? It works fine with other shells


$ cat tst.sh
cat 

Re: Examples of concurrent coproc usage?

2024-04-08 Thread Chet Ramey

On 4/4/24 8:52 AM, Carl Edquist wrote:


Zack illustrated basically the same point with his example:


exec {fd}< <( some command )
while IFS='' read -r line <&"${fd}"; do
  # do stuff
done
{fd}<&-


A process-substitution open to the shell like this is effectively a 
one-ended coproc (though not in the jobs list), and it behaves reliably 
here because the user can count on {fd} to remain open even after the child 
process terminates.


That exposes the fundamental difference. The procsub is essentially the
same kind of object as a coproc, but it exposes the pipe endpoint(s) as
filenames. The shell maintains open file descriptors to the child process
whose input or output it exposes as a FIFO or a file in /dev/fd, since
you have to have a reader and a writer. The shell closes the file
descriptor and, if necessary, removes the FIFO when the command for which
that was one of the word expansions (or a redirection) completes. coprocs
are designed to be longer-lived, and not associated with a particular
command or redirection.

But the important piece is that $fd is not the file descriptor the shell
keeps open to the procsub -- it's a new file descriptor, dup'd from the
original by the redirection. Since it was used with `exec', it persists
until the script explicitly closes it. It doesn't matter when the shell
reaps the procsub and closes the file descriptor(s) -- the copy in $fd
remains until the script explicitly closes it. You might get read returning
failure at some point, but the shell won't close $fd for you.

Since procsubs expand to filenames, even opening them is sufficient to
give you a new file descriptor (with the usual caveats about how different
OSs handle the /dev/fd device).

You can do this yourself with coprocs right now, with no changes to the
shell.
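
A sketch (MYPROC is an arbitrary name):

coproc MYPROC { cat; }
exec {in}<&"${MYPROC[0]}" {out}>&"${MYPROC[1]}"  # script-managed copies
# $in and $out persist even after the shell reaps the coproc and closes
# the fds in ${MYPROC[@]}; close them with exec when done.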


So, the user can determine when the coproc fds are no longer needed, 
whether that's when EOF is hit trying to read from the coproc, or whatever 
other condition.


Duplicating the file descriptor will do that for you.


Personally I like the idea of 'closing' a coproc explicitly, but if it's a 
bother to add options to the coproc keyword, then I would say just let the 
user be responsible for closing the fds.  Once the coproc has terminated 
_and_ the coproc's fds are closed, then the coproc can be deallocated.


This is not backwards compatible. coprocs may be a little-used feature, but
you're adding a burden on the shell programmer that wasn't there
previously.


Apparently there is already some detection in there for when the coproc fds 
get closed, as the {NAME[@]} fd array members get set to -1 automatically 
when you do, e.g., 'exec {NAME[0]}<&-'.  So perhaps this won't be a 
radical change.


Yes, there is some limited checking in the redirection code, since the
shell is supposed to manage the coproc file descriptors for the user.



Alternatively (or, additionally), you could interpret 'unset NAME' for a 
coproc to mean "deallocate the coproc."  That is, close the {NAME[@]} fds, 
unset the NAME variable, and remove any coproc bookkeeping for NAME.


Hmmm. That's not unreasonable.


What should it do to make sure that the variables don't hang around with 
invalid file descriptors?


First, just to be clear, the fds to/from the coproc pipes are not invalid 
when the coproc terminates (you can still read from them); they are only 
invalid after they are closed.


That's only sort of true; writing to a pipe for which there is no
reader generates SIGPIPE, which is a fatal signal. If the coproc
terminates, the file descriptor to write to it becomes invalid because
it's implicitly closed. If you restrict yourself to reading from coprocs,
or doing one initial write and then only reading from there on, you can
avoid this, but it's not the general case.
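
A short non-interactive illustration (the fd name is arbitrary; assumes
/dev/fd-style process substitution):

#!/bin/bash
exec {w}> >(true)  # the only reader exits immediately
sleep 1            # give it time to go away
echo hi >&"${w}"   # write to a pipe with no reader
echo not reached   # never printed: the shell dies from SIGPIPE (status 141)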

The surprising bit is when they become invalid unexpectedly (from the point 
of view of the user) because the shell closes them automatically, at the 
somewhat arbitrary timing when the coproc is reaped.


No real difference from procsubs.

Second, why is it a problem if the variables keep their (invalid) fds after 
closing them, if the user is the one that closed them anyway?


Isn't this how it works with the auto-assigned fd redirections?


Those are different file descriptors.



 $ exec {d}<.
 $ echo $d
 10
 $ exec {d}<&-
 $ echo $d
 10


The shell doesn't try to manage that object in the same way it does a
coproc. The user has explicitly indicated they want to manage it.


But, as noted, bash apparently already ensures that the variables don't 
hang around with invalid file descriptors, as once you close them the 
corresponding variable gets updated to "-1".


Yes, the shell trying to be helpful. It's a managed object.

If the user has explicitly closed both fd ends for a coproc, it should not 
be a surprise to the user either way - whether the variable gets unset 
automatically, or whether it remains with (-1 -1).


Since you are already 

Re: Examples of concurrent coproc usage?

2024-04-08 Thread Chet Ramey

On 4/3/24 1:19 PM, Zachary Santer wrote:

On Wed, Apr 3, 2024 at 10:32 AM Chet Ramey  wrote:


How long should the shell defer deallocating the coproc after the process
terminates? What should it do to make sure that the variables don't hang
around with invalid file descriptors? Or should the user be responsible for
unsetting the array variable too? (That's never been a requirement,
obviously.)


For sake of comparison, and because I don't know the answer, what does
bash do behind the scenes in this situation?

exec {fd}< <( some command )
while IFS='' read -r line <&"${fd}"; do
   # do stuff
done
{fd}<&-

Because the command in the process substitution isn't waiting for
input, (I think) it could've exited at any point before all of its
output has been consumed. Even so, bash appears to handle this
seamlessly.


Bash doesn't close the file descriptor in $fd. Since it's used with `exec',
it's under the user's control.

The script here explicitly opens and closes the file descriptor, so it
can read until read returns failure. It doesn't really matter when the
process exits or whether the shell closes its ends of the pipe -- the
script has made a copy that it can use for its own purposes. (And you
need to use exec to close it when you're done.)

You can do the same thing with a coproc. The question is whether or
not scripts should have to.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: Potential Bash Script Vulnerability

2024-04-08 Thread Robert Elz
Date:Mon, 8 Apr 2024 12:32:13 +0300
From:=?UTF-8?B?T8SfdXo=?= 
Message-ID:  


  | The only ash clone that does this is gwsh, all others print "a" and a
  | command-not-found error.

I have (today, after your e-mail) changed the NetBSD shell so it works
as (apparently) required (no real code ever seems to be affected):

jacaranda$ ./sh <

Re: Potential Bash Script Vulnerability

2024-04-08 Thread Chet Ramey

On 4/7/24 12:17 AM, ad...@osrc.rip wrote:

Hello everyone!

I've attached a minimal script which shows the issue, and my recommended 
solution.


Hi. Thanks for the report. This seems more like a case of mismatched
expectations.

Bash tries, within reason, to read its input a command at a time, and to
leave child processes with the file pointer set to the location in the
script corresponding to the commands it has consumed. POSIX requires this
behavior if the shell is reading script input from stdin.

It seems like you expect the shell to read and buffer input (like stdio,
for instance) so that at any point it has read more input than it has
processed. This isn't unreasonable, but it's not how shells have behaved.

Not doing this file location sync isn't a solution to your theoretical
vulnerability, either. Since scripts are simply text files, you just have
to arrange to alter file contents beyond where the script has read and
buffered input data, subject to Kerin Millar's comments about not changing
the inode.

If you want the shell to read and parse an entire script before executing
any of it, the group command solution is a good one. This has the advantage
of potentially finding syntax errors before executing any commands, which
might be desirable.
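
For example, a sketch of that approach:

#!/bin/bash
{
  # entire script body; the shell parses the whole group command
  # before executing any of it
  echo hello
  exit  # nothing appended after the group is ever executed
}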

Chet
--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: Potential Bash Script Vulnerability

2024-04-08 Thread Andreas Schwab
On Apr 08 2024, Greg Wooledge wrote:

> Now imagine what happens if the shell is killed by a SIGKILL, or if
> the system simply crashes during the script's execution.  The script
> is left with altered permissions.

Or the script is executed by more than one process at the same time.

-- 
Andreas Schwab, SUSE Labs, sch...@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."



Re: Potential Bash Script Vulnerability

2024-04-08 Thread Greg Wooledge
On Mon, Apr 08, 2024 at 02:23:18PM +0300, ad...@osrc.rip wrote:
> Btw, wouldn't it be possible (and worthwhile) to temporarily revoke the
> user's write access while the script is being executed as root, and restore
> the original permissions after execution?

I think that would be a huge overreach.  It would also lead to a whole
lot of breakage.

Imagine that we implement this change.  It would have to be done in
the shell, since the kernel simply offloads script execution to the
interpreter.  So, your change would essentially add code to the shell
which causes it to change the permissions on a script that it's
reading, if that script is given as a command-line argument, and if
the shell's EUID is 0.  Presumably it would change the permissions
back to normal at exit.

Now imagine what happens if the shell is killed by a SIGKILL, or if
the system simply crashes during the script's execution.  The script
is left with altered permissions.



Re: Potential Bash Script Vulnerability

2024-04-08 Thread admin

On 2024-04-08 14:02, Greg Wooledge wrote:

On Mon, Apr 08, 2024 at 12:40:55PM +0700, Robert Elz wrote:

or perhaps better just:

  main() { ... } ; main "$@"


You'd want to add an "exit" as well, to protect against new lines of
code being appended to the script.


Yes, that is correct. It's far easier to add new lines than to edit the 
content unnoticed, since you would have to know where you can insert or 
replace something, e.g. a comment.


Btw, wouldn't it be possible (and worthwhile) to temporarily revoke the 
user's write access while the script is being executed as root, and restore 
the original permissions after execution? The problem isn't really how it's 
executed, but that it's writable during execution...
This could, of course, leave the temporary permissions in place if the 
process is killed...



Tibor



Re: Potential Bash Script Vulnerability

2024-04-08 Thread Greg Wooledge
On Mon, Apr 08, 2024 at 12:40:55PM +0700, Robert Elz wrote:
> or perhaps better just:
> 
>   main() { ... } ; main "$@"

You'd want to add an "exit" as well, to protect against new lines of
code being appended to the script.
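
Something like this sketch:

main() {
    # ... whole script body ...
    :
}
main "$@"; exit  # read as one line; anything appended later never runs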



Re: Potential Bash Script Vulnerability

2024-04-08 Thread Oğuz
On Mon, Apr 8, 2024 at 5:58 AM Robert Elz  wrote:
> Shells interpret their input in much the same way, regardless of
> from where it comes.   Would you really want your login shell to
> just collect commands that you type (possibly objecting to those
> with syntax errors) but executing none of them (including "exit")
> until you log out (send EOF) ?

On a related note, POSIX says this:

When the shell is using standard input and it invokes a command that
also uses standard input, the shell shall ensure that the standard
input file pointer points directly after the command it has read when
the command begins execution. It shall not read ahead in such a manner
that any characters intended to be read by the invoked command are
consumed by the shell (whether interpreted by the shell or not) or
that characters that are not read by the invoked command are not seen
by the shell.

So this command

sh <

Re: Potential Bash Script Vulnerability

2024-04-08 Thread Kerin Millar
On Mon, 8 Apr 2024, at 5:29 AM, John Passaro wrote:
> if you wanted this for your script - read all then start semantics, as 
> opposed to read-as-you-execute - would it work to rewrite yourself 
> inside a function?
>
> function main() { ... } ; main

Mostly, yes. My initial post in this thread spoke of it. It isn't a panacea 
because a sufficiently large compound command can cause bash to run out of 
stack space. In that case, all one can do is break the script down further 
into additional, smaller functions.

-- 
Kerin Millar



Re: Potential Bash Script Vulnerability

2024-04-08 Thread admin

On 2024-04-08 05:58, Robert Elz wrote:

Date:Mon, 8 Apr 2024 02:50:29 +0100
From:Kerin Millar 
Message-ID:  
<20240408025029.e7585f2f52fe510d2a686...@plushkava.net>


  | which is to read scripts in their entirety before trying to execute
  | the resulting program. To go about it that way is not typical of sh
  | implementations, for whatever reason.

Shells interpret their input in much the same way, regardless of
from where it comes.   Would you really want your login shell to
just collect commands that you type (possibly objecting to those
with syntax errors) but executing none of them (including "exit")
until you log out (send EOF) ?

kre
My answer to that would be: No! I would expect it to handle file 
execution a bit differently than terminal input.
Anyway... I reported what I found concerning; you guys know best what 
can be done about it and whether it's worth doing. I'm not involved in 
bash's development, so the rest is up to you.
I'm gonna put my code in {} and end with exit from now on, to make it at 
least somewhat safer.


Tibor