Re: if source command.sh & set -e issue

2024-07-26 Thread Chet Ramey
On 7/24/24 1:53 PM, Mor Shalev via Bug reports for the GNU Bourne Again 
SHell wrote:



if source command.sh ; then
  echo pass
fi


The man page says, about this scenario:

"The  shell  does  not  exit if the command that fails is
 part of the command list immediately following  a  while
 or  until  keyword, part of the test following the if or
 elif reserved words, part of any command executed  in  a
 &&  or || list except the command following the final &&
 or ||, any command in a pipeline but the last, or if the
 command's  return  value is being inverted with !.  If a
 compound command other than a subshell  returns  a  non-
 zero  status because a command failed while -e was being
 ignored, the shell does not exit."

The `set -e' is ignored because the `source' command is run in the test
portion of the `if' command.



Or, similarly:

source command.sh && echo pass


The set -e would be ignored because the source command is run as a non-
final part of an and-or list.
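Both ignored contexts are easy to reproduce in a few lines. The following is a runnable sketch: the sourced file mirrors the reporter's command.sh, but is created via mktemp here just for the demo.

```shell
# Sketch: `set -e` is ignored both in an if test and in a non-final
# part of an and-or list. The sourced file mirrors command.sh.
f=$(mktemp)
cat > "$f" <<'EOF'
set -e
false                    # does not exit the shell: -e is being ignored
echo "inner kept going"
EOF

if source "$f"; then
    echo "pass (if test)"
fi

source "$f" && echo "pass (and-or list)"
```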


looking at the following:
morshalev@morsh:~/workspace1/soc/soc> cat script.sh
#!/bin/bash
(source command.sh) && echo pass


This is run in a subshell.



morshalev@morsh:~/workspace1/soc/soc> cat command.sh
#!/bin/bash
set -e
echo calling tool a
echo calling tool b
false
echo calling tool c
echo calling tool d


The man page says:

"If  a  compound  command or shell function executes in a
 context where -e is being ignored, none of the  commands
 executed  within  the  compound command or function body
 will be affected by the -e setting, even if  -e  is  set
 and  a  command returns a failure status.  If a compound
 command or shell function sets -e while executing  in  a
 context  where -e is ignored, that setting will not have
 any effect until the compound  command  or  the  command
 containing the function call completes."

So `set -e' doesn't take effect until the setting is no longer being
ignored. The contents of the sourced file are parsed as a compound
command.
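A sketch of that deferral: `set -e` runs inside the ignored context, the rest of the sourced file still executes, and `$-` shows that errexit lands only after the and-or list completes. (The file is created inline for the demo.)

```shell
# Sketch: set -e executed while -e is being ignored takes effect only
# after the command containing it completes.
f=$(mktemp)
cat > "$f" <<'EOF'
set -e                   # runs while -e is being ignored
false                    # still no exit
echo "reached end of sourced file"
EOF

source "$f" && echo pass # the whole list runs to completion
case $- in *e*) echo "errexit took effect once the list completed" ;; esac
```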

The POSIX text is substantially similar, from

https://pubs.opengroup.org/onlinepubs/9799919799/utilities/V3_chap02.html#tag_19_26



in 4.2 we get:
morshalev@morshalev:~/workspace16/soc2> ./script.sh
calling tool a
calling tool b

in 4.4 we get:
morshalev@morsh:~/workspace1/soc/soc> ./script.sh
calling tool a
calling tool b
calling tool c
calling tool d
pass


It changed in bash-4.3 as the result of

https://www.austingroupbugs.net/view.php?id=52

which codified the current requirements and

https://lists.gnu.org/archive/html/bug-bash/2012-12/msg00012.html

which specifically addressed this case.

There was extensive discussion on the posix shell mailing list in 2009
where we all hashed out the requirements and new behavior.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



OpenPGP_signature.asc
Description: OpenPGP digital signature


Re: 'wait -n' with and without id arguments

2024-07-26 Thread Chet Ramey

On 7/20/24 1:50 PM, Zachary Santer wrote:

Was "waiting for process substitutions"

For context:
$ git show -q origin/devel
commit 6c703092759ace29263ea96374e18412c59acc7f (origin/devel)
Author: Chet Ramey 
Date:   Thu Jul 18 16:48:17 2024 -0400

 job control cleanups; wait -n can return terminated jobs if
supplied pid arguments; wait -n can wait for process substitutions if
supplied pid arguments

On Thu, Jul 18, 2024 at 12:36 PM Chet Ramey  wrote:


On 7/14/24 8:40 PM, Zachary Santer wrote:


On Fri, Jul 5, 2024 at 2:38 PM Chet Ramey  wrote:


There is code tagged
for bash-5.4 that allows `wait -n' to look at these exited processes as
long as it's given an explicit set of pid arguments.


I agree with all the knowledgeable people here telling you that the
way 'wait -n' is still implemented in bash-5.3-alpha is obviously
wrong, but I also want to point out that the way you plan to change
its behavior in bash-5.4 still won't allow Greg's example below to
work reliably.


OK, but how would you do that? If a job has already terminated, and been
removed from the jobs list, how would you know that `wait -n' without pid
arguments should return it? There can be an arbitrary number of pids on
the list of saved pids and statuses -- the only way to clear it using wait
is to run `wait' without arguments.

You can say not to remove the job from the jobs list, which gets into the
same notification issues that originally started this discussion back in
January, and I have made a couple of changes in that area in response to
the original report that I think will address some of those. But once you
move the job from the jobs list to this list of saved pids, `wait' without
-n or pid arguments won't look for it any more (and will clear the list
when it completes). Why should `wait -n' without pid arguments do so?


'wait' without -n or pid arguments doesn't look in the list of saved
pids and statuses simply because it would serve no purpose for it to
do so. The return status will be 0, no matter how any child process
terminated, or even if there never was a child process. *

For 'wait -n', on the other hand:
"If the -n option is supplied, wait waits for a single job from the
list of ids or, if no ids are supplied, any job, to complete and
returns its exit status."
People are going to naturally expect 'wait -n' without pid arguments
to return immediately with the exit status of an already-terminated
child process, even if they don't provide id arguments. In order to do
so, 'wait -n' obviously has to look in the list of saved pids and
statuses.


I think the part you're missing is that processes get moved to this list
after the user has already been notified of their termination status.



If two jobs happen to finish simultaneously, the next call to wait -n
should reap one of them, and then the call after that should reap
the other.  That's how everyone wants it to work, as far as I've seen.

*Nobody* wants it to skip the job that happened to finish at the exact
same time as the first one, and then wait for a third job.  If that
happens in the loop above, you'll have only 4 jobs running instead of 5
from that point onward.
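For reference, the loop pattern being described is the usual bounded-parallelism pool; a sketch follows, where `do_work`, the task list, and the sleep durations are all stand-ins.

```shell
# Sketch of the job pool under discussion: keep at most $max jobs
# running; rely on `wait -n` to reap exactly one completed job per
# iteration once the pool is full.
max=5
running=0
do_work() { sleep "$1"; }      # stand-in worker

for task in 0.2 0.2 0.2 0.1 0.1 0.1 0.1; do
    if [ "$running" -ge "$max" ]; then
        wait -n                # one job finished; check its status here if needed
        running=$((running - 1))
    fi
    do_work "$task" &
    running=$((running + 1))
done
wait                           # collect the remaining jobs
```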


OK, if you can send me an example that shows bash doing the wrong thing
here, we can talk about it.



* Or in any case, 'wait' without arguments as the first command in a
script seemed to have a termination status of 0. But then the manual
says:
"If none of the supplied arguments is a child of the shell, or if no
arguments are supplied and the shell has no unwaited-for children, the
exit status is 127."


That describes the behavior of `wait -n' without arguments; it immediately
follows the sentence introducing the -n option.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: waiting for process substitutions

2024-07-26 Thread Chet Ramey

On 7/20/24 10:47 AM, Zachary Santer wrote:


I feel like that's most of the way there. I would argue 'wait -n'
without arguments should include the "last-executed process
substitution, if its process id is the same as $!," in the set of
running child processes it'll wait for, at least. Just for the sake of
consistency with 'wait' without arguments.


You could have looked at the actual commit, which contains the change log,
which says

- wait_for_any_job: check for any terminated procsubs as well as any
  terminated background jobs

wait_for_any_job is the function that backs `wait -n' without arguments.
Right now the code that restricts it to the last procsub is commented out.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: proposed BASH_SOURCE_PATH

2024-07-25 Thread Chet Ramey

On 7/18/24 4:44 PM, konsolebox wrote:

On Thu, Jul 18, 2024 at 11:02 PM Chet Ramey  wrote:


On 7/11/24 3:51 AM, konsolebox wrote:

On Thu, Jul 11, 2024 at 4:08 AM Chet Ramey  wrote:

and the BASH_SOURCE
absolute pathname discussion has been bananas, so that's not going in any
time soon.


Maybe just create BASH_SOURCE_REAL instead to avoid the gripes.


I don't think so. It's not very useful to have two variables that are so
similar -- it's needless overhead.


So I guess it's really now just about BASH_SOURCE.  What's the final
decision on this?  I don't think waiting for more input would make a
difference.


The best path forward is to enable the behavior with a new shell option,
initially disabled.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: if source command.sh & set -e issue

2024-07-24 Thread Greg Wooledge
On Wed, Jul 24, 2024 at 20:53:33 +0300, Mor Shalev via Bug reports for the GNU 
Bourne Again SHell wrote:
> if source command.sh ; then
>   echo pass
> fi
> Or, similarly:
> 
> source command.sh && echo pass

Remember how -e is defined:

  -e  Exit  immediately  if a pipeline (which may consist of a
  single simple command), a list, or  a  compound  command
  (see SHELL GRAMMAR above), exits with a non-zero status.
  The shell does not exit if the  command  that  fails  is
  part  of  the command list immediately following a while
  or until keyword, part of the test following the  if  or
  elif  reserved  words, part of any command executed in a
  && or || list except the command following the final  &&
  or ||, any command in a pipeline but the last, or if the
  command's return value is being inverted with !.


With that in mind, let's re-establish our setup, with two variants for
the initial script:


hobbit:~$ cat script1.sh
#!/bin/bash
echo script.sh begin
source command.sh && echo pass
echo script.sh end
hobbit:~$ cat script2.sh
#!/bin/bash
echo script.sh begin
source command.sh
test $? = 0 && echo pass
echo script.sh end
hobbit:~$ cat command.sh
set -e
echo command.sh start
false
echo command.sh end


Now compare:


hobbit:~$ bash-5.2 script1.sh
script.sh begin
command.sh start
command.sh end
pass
script.sh end
hobbit:~$ bash-5.2 script2.sh
script.sh begin
command.sh start


So, we can see what happened here.  In script1.sh, command.sh is sourced
by a command which is "part of any command executed in a && or || list
except the command following the final && or ||".  And therefore, set -e
does not trigger.

In script2.sh, the source command is NOT part of the command list
following while/until, nor part of the test following if/elif, nor any
part of a &&/|| list.  And set -e triggers.

So this would appear to be part of an intended change, to make the
behavior of -e satisfy the documented requirements.

Please remember, -e is *not* intended to be useful, nor is it intended
to be intuitive.  It's intended to be *bug compatible* with whatever
interpretation the POSIX committee has agreed upon this year.  This
interpretation changes over time, so the behavior of -e also changes.



Re: if source command.sh & set -e issue

2024-07-24 Thread Mor Shalev via Bug reports for the GNU Bourne Again SHell
+g...@wooledge.org

On Wed, Jul 24, 2024 at 8:53 PM Mor Shalev  wrote:

> Hi Greg,
>
> Thanks a lot for your detailed response.
> You are right, I meant "as I expected".
>
> The else in my example was redundant.
> Let's say we remain with:
>
>
> if source command.sh ; then
>   echo pass
> fi
> Or, similarly:
>
> source command.sh && echo pass
> looking at the following:
> morshalev@morsh:~/workspace1/soc/soc> cat script.sh
> #!/bin/bash
> (source command.sh) && echo pass
>
> morshalev@morsh:~/workspace1/soc/soc> cat command.sh
> #!/bin/bash
> set -e
> echo calling tool a
> echo calling tool b
> false
> echo calling tool c
> echo calling tool d
>
>
> in 4.2 we get:
> morshalev@morshalev:~/workspace16/soc2> ./script.sh
> calling tool a
> calling tool b
>
> in 4.4 we get:
> morshalev@morsh:~/workspace1/soc/soc> ./script.sh
> calling tool a
> calling tool b
> calling tool c
> calling tool d
> pass
>
> and we get the same result in 4.4 also when adding 'set -e' to script.sh:
> morshalev@morsh:~/workspace1/soc/soc> cat ./script.sh
> #!/bin/bash
>
> set -e
> source command.sh && echo pass
>
> morshalev@morsh:~/workspace1/soc/soc> ./script.sh
> calling tool a
> calling tool b
> calling tool c
> calling tool d
> pass
>
>
> Thanks!
>
> On Wed, Jul 24, 2024 at 6:56 PM Greg Wooledge  wrote:
>
>> On Wed, Jul 24, 2024 at 16:23:35 +0300, Mor Shalev via Bug reports for
>> the GNU Bourne Again SHell wrote:
>> > script.sh contain:
>> > if source command.sh ; then
>> >   echo pass
>> > else
>> >   echo fail
>> > fi
>> > command.sh contain 'set -e' at start. so command.sh should exit once
>> detect
>> > fail.
>> >
>> > once calling ./script.sh it looks like command.sh dont handle 'set -e'
>> > correctly and it continues the script till the end anyway. (btw, it
>> works
>> > correctly at version 4.2.46(2)-release (x86_64-redhat-linux-gnu)
>>
>> Words like "correctly" lose all their meaning when set -e enters the
>> picture.  I think what you really meant is "as I expected".
>>
>> set -e *rarely* works as one expects.
>>
>> Reproducing what I think you're doing:
>>
>> hobbit:~$ cat script.sh
>> #!/bin/bash
>> if source command.sh ; then
>>   echo pass
>> else
>>   echo fail
>> fi
>> false
>> echo ending script.sh
>> hobbit:~$ cat command.sh
>> set -e
>> false
>> echo still in command.sh
>> hobbit:~$ ./script.sh
>> still in command.sh
>> pass
>> hobbit:~$ bash-4.2 script.sh
>> hobbit:~$ bash-5.0 script.sh
>> still in command.sh
>> pass
>> hobbit:~$ bash-4.4 script.sh
>> still in command.sh
>> pass
>> hobbit:~$ bash-4.3 script.sh
>> still in command.sh
>> pass
>>
>> So, it would appear that the behavior changed between 4.2 and 4.3.  I'll
>> let someone else try to dig up the reasoning behind the change.  I'm more
>> concerned with your misconceptions about set -e.
>>
>> Your bug report implies that you believe "command.sh" is a separate
>> script, which can "exit".  But this isn't the case.  You're reading the
>> lines of command.sh within the same shell process that's reading the
>> parent ./script.sh.
>>
>> If command.sh were actually to run an "exit" command, your entire script
>> would exit, not just command.sh.  If you want to terminate the sourced
>> file but *not* the whole script, you'd need to use "return" instead of
>> "exit".
>>
>> Moreover, since you've run "set -e" inside a sourced file, this turns
>> on set -e for your whole script.  If you examine $- after the source
>> command returns, you'll see that it contains "e".
>>
>> Now, look once more at how bash 4.2 behaved:
>>
>> hobbit:~$ bash-4.2 script.sh
>> hobbit:~$
>>
>> There's no output at all.  Neither "pass" nor "fail" -- because the
>> entire script exited.  Presumably as a result of the "false" command
>> inside command.sh, after set -e had been turned on.
>>
>> Is that *really* what you wanted?  It certainly doesn't sound like it,
>> based on your bug report.
>>
>


Re: if source command.sh & set -e issue

2024-07-24 Thread Mor Shalev via Bug reports for the GNU Bourne Again SHell
Hi Greg,

Thanks a lot for your detailed response.
You are right, I meant "as I expected".

The else in my example was redundant.
Let's say we remain with:


if source command.sh ; then
  echo pass
fi
Or, similarly:

source command.sh && echo pass
looking at the following:
morshalev@morsh:~/workspace1/soc/soc> cat script.sh
#!/bin/bash
(source command.sh) && echo pass

morshalev@morsh:~/workspace1/soc/soc> cat command.sh
#!/bin/bash
set -e
echo calling tool a
echo calling tool b
false
echo calling tool c
echo calling tool d


in 4.2 we get:
morshalev@morshalev:~/workspace16/soc2> ./script.sh
calling tool a
calling tool b

in 4.4 we get:
morshalev@morsh:~/workspace1/soc/soc> ./script.sh
calling tool a
calling tool b
calling tool c
calling tool d
pass

and we get the same result in 4.4 also when adding 'set -e' to script.sh:
morshalev@morsh:~/workspace1/soc/soc> cat ./script.sh
#!/bin/bash

set -e
source command.sh && echo pass

morshalev@morsh:~/workspace1/soc/soc> ./script.sh
calling tool a
calling tool b
calling tool c
calling tool d
pass


Thanks!

On Wed, Jul 24, 2024 at 6:56 PM Greg Wooledge  wrote:

> On Wed, Jul 24, 2024 at 16:23:35 +0300, Mor Shalev via Bug reports for the
> GNU Bourne Again SHell wrote:
> > script.sh contain:
> > if source command.sh ; then
> >   echo pass
> > else
> >   echo fail
> > fi
> > command.sh contain 'set -e' at start. so command.sh should exit once
> detect
> > fail.
> >
> > once calling ./script.sh it looks like command.sh dont handle 'set -e'
> > correctly and it continues the script till the end anyway. (btw, it works
> > correctly at version 4.2.46(2)-release (x86_64-redhat-linux-gnu)
>
> Words like "correctly" lose all their meaning when set -e enters the
> picture.  I think what you really meant is "as I expected".
>
> set -e *rarely* works as one expects.
>
> Reproducing what I think you're doing:
>
> hobbit:~$ cat script.sh
> #!/bin/bash
> if source command.sh ; then
>   echo pass
> else
>   echo fail
> fi
> false
> echo ending script.sh
> hobbit:~$ cat command.sh
> set -e
> false
> echo still in command.sh
> hobbit:~$ ./script.sh
> still in command.sh
> pass
> hobbit:~$ bash-4.2 script.sh
> hobbit:~$ bash-5.0 script.sh
> still in command.sh
> pass
> hobbit:~$ bash-4.4 script.sh
> still in command.sh
> pass
> hobbit:~$ bash-4.3 script.sh
> still in command.sh
> pass
>
> So, it would appear that the behavior changed between 4.2 and 4.3.  I'll
> let someone else try to dig up the reasoning behind the change.  I'm more
> concerned with your misconceptions about set -e.
>
> Your bug report implies that you believe "command.sh" is a separate
> script, which can "exit".  But this isn't the case.  You're reading the
> lines of command.sh within the same shell process that's reading the
> parent ./script.sh.
>
> If command.sh were actually to run an "exit" command, your entire script
> would exit, not just command.sh.  If you want to terminate the sourced
> file but *not* the whole script, you'd need to use "return" instead of
> "exit".
>
> Moreover, since you've run "set -e" inside a sourced file, this turns
> on set -e for your whole script.  If you examine $- after the source
> command returns, you'll see that it contains "e".
>
> Now, look once more at how bash 4.2 behaved:
>
> hobbit:~$ bash-4.2 script.sh
> hobbit:~$
>
> There's no output at all.  Neither "pass" nor "fail" -- because the
> entire script exited.  Presumably as a result of the "false" command
> inside command.sh, after set -e had been turned on.
>
> Is that *really* what you wanted?  It certainly doesn't sound like it,
> based on your bug report.
>


Re: if source command.sh & set -e issue

2024-07-24 Thread Greg Wooledge
On Wed, Jul 24, 2024 at 16:23:35 +0300, Mor Shalev via Bug reports for the GNU 
Bourne Again SHell wrote:
> script.sh contain:
> if source command.sh ; then
>   echo pass
> else
>   echo fail
> fi
> command.sh contain 'set -e' at start. so command.sh should exit once detect
> fail.
> 
> once calling ./script.sh it looks like command.sh dont handle 'set -e'
> correctly and it continues the script till the end anyway. (btw, it works
> correctly at version 4.2.46(2)-release (x86_64-redhat-linux-gnu)

Words like "correctly" lose all their meaning when set -e enters the
picture.  I think what you really meant is "as I expected".

set -e *rarely* works as one expects.

Reproducing what I think you're doing:

hobbit:~$ cat script.sh
#!/bin/bash
if source command.sh ; then
  echo pass
else
  echo fail
fi
false
echo ending script.sh
hobbit:~$ cat command.sh
set -e
false
echo still in command.sh
hobbit:~$ ./script.sh
still in command.sh
pass
hobbit:~$ bash-4.2 script.sh
hobbit:~$ bash-5.0 script.sh
still in command.sh
pass
hobbit:~$ bash-4.4 script.sh
still in command.sh
pass
hobbit:~$ bash-4.3 script.sh
still in command.sh
pass

So, it would appear that the behavior changed between 4.2 and 4.3.  I'll
let someone else try to dig up the reasoning behind the change.  I'm more
concerned with your misconceptions about set -e.

Your bug report implies that you believe "command.sh" is a separate
script, which can "exit".  But this isn't the case.  You're reading the
lines of command.sh within the same shell process that's reading the
parent ./script.sh.

If command.sh were actually to run an "exit" command, your entire script
would exit, not just command.sh.  If you want to terminate the sourced
file but *not* the whole script, you'd need to use "return" instead of
"exit".

Moreover, since you've run "set -e" inside a sourced file, this turns
on set -e for your whole script.  If you examine $- after the source
command returns, you'll see that it contains "e".
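Greg's `$-` check looks like this in practice (a sketch; the sourced file is reduced to just the `set -e` line and is created inline for the demo):

```shell
# Sketch: `set -e` run inside a sourced file turns on errexit for the
# calling script; $- shows it after the source returns.
f=$(mktemp)
printf 'set -e\n' > "$f"
echo "before: $-"
source "$f"
echo "after:  $-"        # now contains "e": errexit is on in the caller
```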

Now, look once more at how bash 4.2 behaved:

hobbit:~$ bash-4.2 script.sh
hobbit:~$ 

There's no output at all.  Neither "pass" nor "fail" -- because the
entire script exited.  Presumably as a result of the "false" command
inside command.sh, after set -e had been turned on.

Is that *really* what you wanted?  It certainly doesn't sound like it,
based on your bug report.



Re: improving '{...}' in bash?

2024-07-24 Thread Harald Dunkel

On 2024-07-23 11:10:23, Martin D Kealey wrote:

On Tue, 23 Jul 2024, 15:50 Harald Dunkel,  wrote:


Hi folks,

This feels weird:



Did you read the manual before trying any of these?



Of course not. I just wanted to say, this {...} construct is rather
unintuitive to use. I know how to write this, but when I try to use
it, it feels like I am breaking my fingers to enter the code.

No offense

Regards

Harri



Re: [PATCH] malloc: fix out-of-bounds read

2024-07-23 Thread Chet Ramey

On 7/23/24 2:08 AM, Collin Funk wrote:

Hi Chet,

Chet Ramey  writes:


/* Use this when we want to be sure that NB is in bucket NU. */
#define RIGHT_BUCKET(nb, nu) \
(((nb) > binsizes[(nu)-1]) && ((nb) <= binsizes[(nu)]))


The right fix here is two-fold: fix the first test here to evaluate to 0
if nu == 0, and change the call in internal_realloc similarly to how your
patch changes it for the nunits - 1 case.


Ah, okay I see what you mean. Thanks.

Did you want a revised patch or do you have it under control?


I got it, thanks.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/




Re: improving '{...}' in bash?

2024-07-23 Thread Martin D Kealey
On Tue, 23 Jul 2024, 15:50 Harald Dunkel,  wrote:

> Hi folks,
>
> This feels weird:
>

Did you read the manual before trying any of these?

% echo x{1,2}x
> x1x x2x
> % echo x{1}x
> x{1}x
>

Why are you trying to use a multiplier syntax when you don't have more than
one option?

Be aware that brace expansion occurs before variable expansion, so you
can't put a brace-style list in a variable and then expect it to be
expanded; brace expansion is only intended to be used with literals, and
nobody would bother to write such a literal.

Besides this, the shell is *required* not to replace braces *except* for
the few express patterns described in the manual.
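The ordering Martin describes can be shown in two lines (variable name is illustrative):

```shell
# Brace expansion runs before variable expansion, so a brace list kept
# in a variable is never expanded as braces:
list='{1,2}'
echo x${list}x           # too late for brace expansion
echo x{1,2}x             # literal braces are expanded
```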

% echo x{1..3,5}x
> x1..3x x5x
>

That's what I would expect, yes.

> I would have expected "x1x" and "x1x x2x x3x x5x".


Not when it's missing the second pair of braces; perhaps you intended to
use `echo x{{1..3},5}x`?

-Martin


Re: [PATCH] malloc: fix out-of-bounds read

2024-07-23 Thread Collin Funk
Hi Chet,

Chet Ramey  writes:

>> /* Use this when we want to be sure that NB is in bucket NU. */
>> #define RIGHT_BUCKET(nb, nu) \
>>  (((nb) > binsizes[(nu)-1]) && ((nb) <= binsizes[(nu)]))
>
> The right fix here is two-fold: fix the first test here to evaluate to 0
> if nu == 0, and change the call in internal_realloc similarly to how your
> patch changes it for the nunits - 1 case.

Ah, okay I see what you mean. Thanks.

Did you want a revised patch or do you have it under control?

Collin



Re: [PATCH] malloc: fix out-of-bounds read

2024-07-22 Thread Chet Ramey

On 7/19/24 1:06 AM, Collin Funk wrote:

Hi,

In lib/malloc/malloc.c there is a read that occurs 1 or 2 indexes before
the first element in the buffer. The issue is this macro:


Thanks for the report. This affects calls to realloc with size < 64 bytes.



/* Use this when we want to be sure that NB is in bucket NU. */
#define RIGHT_BUCKET(nb, nu) \
(((nb) > binsizes[(nu)-1]) && ((nb) <= binsizes[(nu)]))


The right fix here is two-fold: fix the first test here to evaluate to 0
if nu == 0, and change the call in internal_realloc similarly to how your
patch changes it for the nunits - 1 case.


Chet
--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: waiting for process substitutions

2024-07-20 Thread Zachary Santer
On Thu, Jul 18, 2024 at 5:02 PM Chet Ramey  wrote:
>
> It's not in the set of changes to `wait -n' I just pushed, but it will be
> in the next push.

Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: msys
Compiler: gcc
Compilation CFLAGS: -g -O2
uname output: MSYS_NT-10.0-19045 Zack2021HPPavilion
3.5.3-d8b21b8c.x86_64 2024-07-09 18:03 UTC x86_64 Msys
Machine Type: x86_64-pc-msys

Bash Version: 5.3
Patch Level: 0
Release Status: alpha

Description:
I was going to say, it looks like 'wait -n' just couldn't wait for
procsubs at all.

Repeat-By:
./procsub-wait-n false false
./procsub-wait-n true false
./procsub-wait-n false true
./procsub-wait-n true true

I haven't figured out how to build the devel branch in MSYS2 MSYS.

Judging by your commit message:
> job control cleanups; wait -n can return terminated jobs if supplied pid 
> arguments; wait -n can wait for process substitutions if supplied pid 
> arguments

I feel like that's most of the way there. I would argue 'wait -n'
without arguments should include the "last-executed process
substitution, if its process id is the same as $!," in the set of
running child processes it'll wait for, at least. Just for the sake of
consistency with 'wait' without arguments.


procsub-wait-n
Description: Binary data


Re: pwd and prompt don't update after deleting current working directory

2024-07-19 Thread David Hedlund

On 2024-07-19 07:30, Martin D Kealey wrote:


TL;DR: what you are asking for is unsafe, and should never be added to 
any published version of any shell.


On Tue, 16 Jul 2024 at 17:47, David Hedlund  wrote:

Do you think that it would be appropriate to submit this feature
request to the developers of the rm command instead.

Given that I mistakenly suggested a "-b" (for bounce out of the
directory when deleted) option (which is not feasible for the rm
command), a more appropriate feature request for the rm command
developers might be: consider implementing an option for rm that
prevents its execution when the current working directory (pwd) is
within the target directory slated for removal. This safety feature
could help prevent accidental deletion of directories users are
currently navigating.


However, I should note that I'm not particularly proficient in bash 
scripting, so there are likely downsides or complications to this 
suggestion that I haven't considered.
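A user-level approximation of that safety check can be done in the shell today; the following is an illustrative sketch (the function name is made up, and it relies on GNU `realpath`), not an actual rm option:

```shell
# Sketch: refuse to remove a directory tree that contains the current
# working directory. safe_rm_rf is a hypothetical wrapper, not rm itself.
safe_rm_rf() {
    local target
    target=$(realpath -- "$1") || return 1
    case "$(pwd -P)/" in
        "$target"/*)
            echo "refusing: current directory is inside $target" >&2
            return 1 ;;
    esac
    rm -rf -- "$target"
}
```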




This suggestion hints at some serious misunderstandings.

Firstly, under normal circumstances two processes cannot interfere
with each others' internal states (*1) - and yes, every process has a
*separate* current directory as part of its internal state.


*Most* of that internal state is copied from its parent when it
starts, which gives the illusion that the shell is changing things in
its children, but in reality, it's setting their starting conditions,
and cannot influence them thereafter.


Secondly, *most* commands that you type into a shell are separate
programs, not part of the shell. Moreover, the *terminal* is a
separate program from the shell, and they can only interact through
the tty byte stream.


Thirdly, the kernel tracks the current directory on behalf of each
process. It tracks the directory by its identity, *not* by its name.
(*2) This means that you can do this:


    $ mkdir /tmp/a
    $ cd /tmp/a
    $ mv ../a ../b
    $ /bin/pwd
    /tmp/b

Note that as an efficiency measure, the built-in `pwd` command and
the expansion `$PWD` give the answer cached by the most recent `cd`,
so this should be considered unreliable:


    $ pwd
    /tmp/a
    $ cd -P .
    $ pwd
    /tmp/b

For comparison, caja (file manager in MATE) is stepping back as
many directories as needed when it is located in a directory that
is deleted in bash or caja.


Comparing programs with dissimilar purposes is, erm, unconvincing.

Caja's *first* purpose is to display information about a filesystem.
To make this more comprehensible to the user, it focuses on one
directory at a time. (*3)


Critically, every time you make a change, it shows you the results 
before you can make another change.


That is pretty much the opposite of a shell.

Bash (like other shells) is primarily a scripting language and a
command line interface, whose purpose is to invoke other commands
(some of which may be built-ins (*4)). The shell is supposed to *do*
things *without* showing you what's happened. If you want to see the
new state of the system, you ask it to run a program such as `pwd`
or `ls` to show you. (*5)


Currently if a program is invoked in an unlinked current directory, 
most likely it will complain but otherwise do nothing.
But if the shell were to surreptitiously change directory, a 
subsequent command invoked in an unexpected current directory could 
wreak havoc, including deleting or overwriting the wrong files or 
running the wrong programs, and with no guarantee that there will be 
any warning indications.


All that said, if you want to risk breaking your own system, feel free
to add the relevant commands to `PROMPT_COMMAND` as suggested by
other folk.


-Martin

*1: Okay, yes there are debugging facilities, but unless the target
program is compiled with debugging support, attempting to change the
internal state of the other program stands a fair chance of making it
crash instead. You certainly wouldn't want "rm" to cause your
interactive shell to crash. And there are signals, most of which
default to making the target program *intentionally* crash.


*2: Linux's /proc/$pid/cwd reconstructs the path upon request. Only
when it's deleted does it save the old path with "(deleted)" appended.


*3: It's not even clear that this focal directory is the kernel-level 
current directory of the Caja process, but it probably is. I would 
have to read the source code to verify this.


*4: mostly *regular* built-ins that behave as if they were separate
programs; not to be confused with *special* built-ins, which can do
things to the shell's internal state.


*5: Even if the shell's prompt includes its current directory - which
isn't the default - it could be out of date by the time the user
presses *enter* on their next command.
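For completeness, one way to wire the "add it to PROMPT_COMMAND" suggestion up yourself; this is a sketch that assumes a system (such as Linux) where `getcwd()`, and hence `pwd -P`, fails once the current directory has been unlinked:

```shell
# Sketch: detect a deleted current working directory from the prompt
# hook and bounce to $HOME (falling back to /).
PROMPT_COMMAND='
  if ! pwd -P >/dev/null 2>&1; then
    echo "bash: current directory no longer exists; moving to HOME" >&2
    cd -- "$HOME" || cd /
  fi
'
```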


Re: pwd and prompt don't update after deleting current working directory

2024-07-18 Thread Martin D Kealey
TL;DR: what you are asking for is unsafe, and should never be added to any
published version of any shell.

On Tue, 16 Jul 2024 at 17:47, David Hedlund  wrote:

> Do you think that it would be appropriate to submit this feature request
> to the developers of the rm command instead.
>

This suggestion hints at some serious misunderstandings.

Firstly, under normal circumstances two processes cannot interfere with
each others' internal states (*1) - and yes, every process has a *separate*
current directory as part of its internal state.

*Most* of that internal state is copied from its parent when it starts,
which gives the illusion that the shell is changing things in its children,
but in reality, it's setting their starting conditions, and cannot
influence them thereafter.

Secondly, *most* commands that you type into a shell are separate programs,
not part of the shell. Moreover, the *terminal* is a separate program from
the shell, and they can only interact through the tty byte stream.

Thirdly, the kernel tracks the current directory on behalf of each process.
It tracks the directory by its identity, *not* by its name. (*2) This means
that you can do this:

$ mkdir /tmp/a
$ cd /tmp/a
$ mv ../a ../b
$ /bin/pwd
/tmp/b

Note that as an efficiency measure, the built-in `pwd` command and the
expansion `$PWD` give the answer cached by the most recent `cd`, so this
should be considered unreliable:

$ pwd
/tmp/a
$ cd -P .
$ pwd
/tmp/b

> For comparison, caja (file manager in MATE) is stepping back as many
> directories as needed when it is located in a directory that is deleted in
> bash or caja.
>

Comparing programs with dissimilar purposes is, erm, unconvincing.

Caja's *first* purpose is to display information about a filesystem.
To make this more comprehensible to the user, it focuses on one directory
at a time. (*3)

Critically, every time you make a change, it shows you the results before
you can make another change.

That is pretty much the opposite of a shell.

Bash (like other shells) is primarily a scripting language and a command
line interface, whose purpose is to invoke other commands (some of which
may be built-ins (*4)). The shell is supposed to *do* things *without*
showing you what's happened. If you want to see the new state of the
system, you ask it to run a program such as `pwd` or `ls` to show you.
(*5)

Currently if a program is invoked in an unlinked current directory, most
likely it will complain but otherwise do nothing.
But if the shell were to surreptitiously change directory, a subsequent
command invoked in an unexpected current directory could wreak havoc,
including deleting or overwriting the wrong files or running the wrong
programs, and with no guarantee that there will be any warning indications.

All that said, if you want to risk breaking your own system, feel free to
add the relevant commands to `PROMPT_COMMAND` as suggested by other folk.

-Martin

*1: Okay, yes there are debugging facilities, but unless the target program
is compiled with debugging support, attempting to change the internal state
of the other program stands a fair chance of making it crash instead. You
certainly wouldn't want "rm" to cause your interactive shell to crash. And
there are signals, most of which default to making the target program
*intentionally* crash.

*2: Linux's */proc/$pid/cwd* reconstructs the path upon request. Only when
it's deleted does it save the old path with "(deleted)" appended.

*3: It's not even clear that this focal directory is the kernel-level
current directory of the Caja process, but it probably is. I would have to
read the source code to verify this.

*4: mostly *regular* built-ins that behave as if they were separate
programs; not to be confused with *special* built-ins, which can do things
to the shell's internal state.

*5: Even if the shell's prompt includes its current directory - which isn't
the default - it could be out of date by the time the user presses *enter*
on their next command.


Re: waiting for process substitutions

2024-07-18 Thread Chet Ramey

On 7/14/24 8:40 PM, Zachary Santer wrote:


See my attachments, though. Something about my second set of process
substitutions is causing 'wait' without arguments to not wait for the
final procsub, whose pid is still in $! at the time.


There's an easy fix for this, thanks.

It's not in the set of changes to `wait -n' I just pushed, but it will be
in the next push.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



OpenPGP_signature.asc
Description: OpenPGP digital signature


Re: proposed BASH_SOURCE_PATH

2024-07-18 Thread konsolebox
On Thu, Jul 18, 2024 at 11:02 PM Chet Ramey  wrote:
>
> On 7/11/24 3:51 AM, konsolebox wrote:
> > On Thu, Jul 11, 2024 at 4:08 AM Chet Ramey  wrote:
> >> and the BASH_SOURCE
> >> absolute pathname discussion has been bananas, so that's not going in any
> >> time soon.
> >
> > Maybe just create BASH_SOURCE_REAL instead to avoid the gripes.
>
> I don't think so. It's not very useful to have two variables that are so
> similar -- it's needless overhead.

So I guess it's really now just about BASH_SOURCE.  What's the final
decision on this?  I don't think waiting for more input would make a
difference.


-- 
konsolebox



Re: waiting for process substitutions

2024-07-18 Thread Chet Ramey

On 7/14/24 8:40 PM, Zachary Santer wrote:


On Fri, Jul 5, 2024 at 2:38 PM Chet Ramey  wrote:


There is code tagged
for bash-5.4 that allows `wait -n' to look at these exited processes as
long as it's given an explicit set of pid arguments.


I agree with all the knowledgeable people here telling you that the
way 'wait -n' is still implemented in bash-5.3-alpha is obviously
wrong, but I also want to point out that the way you plan to change
its behavior in bash-5.4 still won't allow Greg's example below to
work reliably.


OK, but how would you do that? If a job has already terminated, and been
removed from the jobs list, how would you know that `wait -n' without pid
arguments should return it? There can be an arbitrary number of pids on
the list of saved pids and statuses -- the only way to clear it using wait
is to run `wait' without arguments.

You can say not to remove the job from the jobs list, which gets into the
same notification issues that originally started this discussion back in
January, and I have made a couple of changes in that area in response to
the original report that I think will address some of those. But once you
move the job from the jobs list to this list of saved pids, `wait' without
-n or pid arguments won't look for it any more (and will clear the list 
when it completes). Why should `wait -n' without pid arguments do so?




On Fri, Jul 12, 2024 at 9:06 PM Greg Wooledge  wrote:


greg@remote:~$ cat ~greybot/factoids/wait-n; echo
Run up to 5 processes in parallel (bash 4.3): i=0 j=5; for elem in
"${array[@]}"; do (( i++ < j )) || wait -n; my_job "$elem" & done; wait


He'd have to do something like this:
set -o noglob
i=0 j=5
declare -a pid_set=()
for elem in "${array[@]}"; do
   if (( ! i++ < j )); then
 wait -n -p terminated_pid -- "${!pid_set[@]}"
 unset pid_set[terminated_pid]
   fi
   my_job "$elem" &
   pid_set[${!}]=''
done
wait

It's probably best that 'wait -n' without arguments and 'wait -n' with
explicit pid arguments have the same relationship to each other as
'wait' without arguments and 'wait' with explicit pid arguments.


That's pretty much what we're talking about here. `wait' without arguments
doesn't look in the list of saved statuses whether -n is supplied or not.
`wait' with pid argument should look in this list whether -n is supplied or
not. But see below for the differences between `wait' with and without pid
arguments whether -n is supplied or not.




In other words, process substitutions notwithstanding,
$ wait
and
$ wait -- "${all_child_pids[@]}"
do the same thing.


That's just not true, and they're not even defined to do the same thing.
If you ask for a specific pid argument, wait will return its exit status
even if the job it belongs to has been removed from the jobs list and
saved on the list of saved pids and statuses. wait without pid arguments
just makes sure there are no running child processes and clears the list of
saved statuses -- it has no reason to look at the saved pid list before it
clears it.



So,
$ wait -n
and
$ wait -n -- "${all_child_pids[@]}"
should also do the same thing.


One issue here is that wait without arguments clears the list of saved
statuses. `wait -n' without arguments doesn't do that, but it probably
should since it's now going to have access to that list, though it would
no doubt break some existing use cases.

The other issue is as above: why should `wait -n' with no pid arguments
do anything with processes on this list? And if you think it should, what
should it do with those processes?

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: [Bug] Array declaration only includes first element

2024-07-18 Thread Lawrence Velázquez
On Thu, Jul 18, 2024, at 8:57 AM, Greg Wooledge wrote:
> $a is equivalent to ${a[0]}.

This is documented, by the way.  It is not a bug.

"Referencing an array variable without a subscript is equivalent to referencing 
with a subscript of 0."

https://www.gnu.org/software/bash/manual/html_node/Arrays.html
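A quick illustration of the documented rule (a sketch; any reasonably recent bash behaves this way):

```shell
a=(aa bb cc dd)
echo "$a"        # unsubscripted reference, same as ${a[0]}: prints "aa"
echo "${a[0]}"   # prints "aa"
echo "${a[@]}"   # the whole array: prints "aa bb cc dd"
```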

-- 
vq



Re: proposed BASH_SOURCE_PATH

2024-07-18 Thread Chet Ramey

On 7/11/24 3:51 AM, konsolebox wrote:

On Thu, Jul 11, 2024 at 4:08 AM Chet Ramey  wrote:

and the BASH_SOURCE
absolute pathname discussion has been bananas, so that's not going in any
time soon.


Maybe just create BASH_SOURCE_REAL instead to avoid the gripes.


I don't think so. It's not very useful to have two variables that are so
similar -- it's needless overhead.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: [Bug] Array declaration only includes first element

2024-07-18 Thread Greg Wooledge
On Thu, Jul 18, 2024 at 00:00:17 +, Charles Dong via Bug reports for the 
GNU Bourne Again SHell wrote:
> - Declare an array: `a=(aa bb cc dd)`
> - Print this array: `echo $a` or `printf $a`

$a is equivalent to ${a[0]}.  That's not how you print an entire array.

The easiest way to print an array is to use "declare -p":

hobbit:~$ a=(aa bb cc dd)
hobbit:~$ declare -p a
declare -a a=([0]="aa" [1]="bb" [2]="cc" [3]="dd")

If you want something a little less noisy, you can use "${a[*]}" to
serialize the whole array to a single string/word, or "${a[@]}" with
the double quotes to expand it to a list of words.

hobbit:~$ echo "<<${a[*]}>>"
<<aa bb cc dd>>
hobbit:~$ printf '<<%s>> ' "${a[@]}"; echo
<<aa>> <<bb>> <<cc>> <<dd>>

See also .



Re: pwd and prompt don't update after deleting current working directory

2024-07-16 Thread David Hedlund



On 2024-07-16 15:31, Lawrence Velázquez wrote:

On Tue, Jul 16, 2024, at 3:47 AM, David Hedlund wrote:

Do you think that it would be appropriate to submit this feature request
to the developers of the rm command instead.

How would this help?  The rm utility cannot change the working directory of the 
shell that invokes it, or of any other process.  Even if it could, that 
wouldn't help you if a different utility/application did the unlinking.

(Never mind that there are no canonical "developers of the rm command".  GNU is 
not the only implementation in the world.)


I appreciate your input. To be honest, I'm currently juggling multiple 
tasks and don't have the necessary bandwidth to fully consider this 
particular issue at the moment. Let's table this discussion for now.




For comparison, caja (file manager in MATE) is stepping back as many
directories as needed when it is located in a directory that is deleted
in bash or caja.

Behavior that is appropriate for GUI applications is not necessarily 
appropriate for CLI utilities, and vice versa.  The comparison is inapt.



Re: pwd and prompt don't update after deleting current working directory

2024-07-16 Thread Lawrence Velázquez
On Tue, Jul 16, 2024, at 3:47 AM, David Hedlund wrote:
> Do you think that it would be appropriate to submit this feature request 
> to the developers of the rm command instead.

How would this help?  The rm utility cannot change the working directory of the 
shell that invokes it, or of any other process.  Even if it could, that 
wouldn't help you if a different utility/application did the unlinking.

(Never mind that there are no canonical "developers of the rm command".  GNU is 
not the only implementation in the world.)

> For comparison, caja (file manager in MATE) is stepping back as many 
> directories as needed when it is located in a directory that is deleted 
> in bash or caja.

Behavior that is appropriate for GUI applications is not necessarily 
appropriate for CLI utilities, and vice versa.  The comparison is inapt.

-- 
vq



Re: pwd and prompt don't update after deleting current working directory

2024-07-16 Thread Chet Ramey

On 7/16/24 3:47 AM, David Hedlund wrote:


On 2024-07-12 15:10, Chet Ramey wrote:

On 7/11/24 9:53 PM, David Hedlund wrote:

Thanks, Lawrence! I found this discussion helpful and believe it would 
be a valuable feature to add. Can I submit this as a feature request?


I'm not going to add this. It's not generally useful for interactive
shells, and dangerous for non-interactive shells.

If this is a recurring problem for you, I suggest you write a shell
function to implement the behavior you want and run it from
PROMPT_COMMAND.

That behavior could be as simple as

pwd -P >/dev/null 2>&1 || cd ..

Do you think that it would be appropriate to submit this feature request to 
the developers of the rm command instead.


You can try, but I would not expect them to implement it.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
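Chet's one-liner can be wrapped in a function run from PROMPT_COMMAND. This is a sketch only (the name `_cwd_rescue` is made up here); it relies on bash's default logical `cd ..`, which computes the parent from $PWD and so still succeeds after the current directory itself has been unlinked:

```shell
# Hypothetical helper: climb to the nearest still-existing ancestor
# whenever the current working directory has been deleted.
_cwd_rescue() {
  until pwd -P >/dev/null 2>&1; do  # getcwd() fails in a deleted cwd
    cd .. || return 1               # bash derives the parent path from $PWD
  done
}
PROMPT_COMMAND=_cwd_rescue
```

As the thread warns, silently changing directory like this is risky for anything but an interactive shell, which is why it belongs in a personal PROMPT_COMMAND rather than in bash itself.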




Re: pwd and prompt don't update after deleting current working directory

2024-07-16 Thread David Hedlund



On 2024-07-12 15:10, Chet Ramey wrote:

On 7/11/24 9:53 PM, David Hedlund wrote:

Thanks, Lawrence! I found this discussion helpful and believe it 
would be a valuable feature to add. Can I submit this as a feature 
request?


I'm not going to add this. It's not generally useful for interactive
shells, and dangerous for non-interactive shells.

If this is a recurring problem for you, I suggest you write a shell
function to implement the behavior you want and run it from
PROMPT_COMMAND.

That behavior could be as simple as

pwd -P >/dev/null 2>&1 || cd ..

Do you think that it would be appropriate to submit this feature request 
to the developers of the rm command instead.


For comparison, caja (file manager in MATE) is stepping back as many 
directories as needed when it is located in a directory that is deleted 
in bash or caja.





Re: Strange behavior after removing folder

2024-07-15 Thread Chet Ramey

On 7/14/24 2:39 PM, Батунин Сергей wrote:


Bash Version: 5.0
Patch Level: 17
Release Status: release

Description:
I entered  the following commands:
cd
   mkdir a
   cd a
   rmdir $PWD
cd .


This succeeds, because you can always cd to `.' (`chdir(".")' works) even
if you can't reach the current directory by any pathname.


Then my $PWD became ~/a/./


The shell attempts to canonicalize the pathname supplied as an argument
by appending it to $PWD and trying to resolve the pathname, removing `.'
and `..' and collapsing multiple slashes to one. This fails, because it
verifies that the canonicalization results in a valid directory name,
and it doesn't. Then it tries to obtain the physical pathname of the
current directory, using the getcwd() algorithm, which also fails. The
fallback is the uncanonicalized full pathname.


Also, i entered
cd ./.
Now my $PWD is ~/a/././.


For the same reason.


Then I pressed ctrl+shift+t , and now in new window my $PWD=/
Is it correct behavior?


This doesn't have anything to do with the shell. It's probably the default
your terminal emulator uses when chdir to $PWD from the current tab fails.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: Local variable can not unset within shell functions

2024-07-15 Thread Chet Ramey

On 7/11/24 8:40 PM, Robert Elz wrote:


Further, if "localness" is considered an attribute of a variable (which
isn't how I would implement it, but assuming it is) then surely declare
should have an option to set the local attribute,


It doesn't need one; using declare in a shell function is sufficient to
create a local variable.


and declare -p should
generate a command which restores that (just as it does for the export
attribute, the integer attribute, and I assume, others.


And we have made a complete circle back to where this began.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: Error in "help ulimit": missing unit info

2024-07-15 Thread Chet Ramey

On 7/14/24 9:59 AM, Carlo Teubner wrote:


Bash Version: 5.2
Patch Level: 26
Release Status: release

Description:
"help ulimit" includes this paragraph:


Thanks for the report. This was changed in the devel branch some time ago.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: anonymous pipes in recursive function calls

2024-07-14 Thread Zachary Santer
On Sun, Jul 7, 2024 at 9:40 PM Zachary Santer  wrote:
>
> On Sun, Jul 7, 2024 at 2:44 PM Chet Ramey  wrote:
> >
> > Why not check the releases -- with patches -- between the two? They're
> > all available via git if you don't want to download tarballs.
>
> Completely fair. I am being a bit lazy.
>
> Really need to bite the bullet and switch to Cygwin.

For anyone who was interested...

Both issues were resolved by the final bash 5.0 patch. Can't build
older versions in MSYS without troubleshooting linking issues. Cygwin
wasn't necessary.

Thanks for your time.



Re: waiting for process substitutions

2024-07-14 Thread Zachary Santer
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: msys
Compiler: gcc
Compilation CFLAGS: -g -O2
uname output: MSYS_NT-10.0-19045 Zack2021HPPavilion
3.5.3-d8b21b8c.x86_64 2024-07-09 18:03 UTC x86_64 Msys
Machine Type: x86_64-pc-msys

Bash Version: 5.3
Patch Level: 0
Release Status: alpha

Fix:
diff --git a/general.c b/general.c
index 5c26ae38..723607eb 100644
--- a/general.c
+++ b/general.c
@@ -834,7 +834,7 @@ absolute_program (const char *string)
   return ((char *)mbschr (string, '/') != (char *)NULL);
 #else
   return ((char *)mbschr (string, '/') != (char *)NULL ||
-   (char *)mbschr (string, '\\') != (char *)NULL)
+   (char *)mbschr (string, '\\') != (char *)NULL);
 #endif
 }


On Tue, Jul 9, 2024 at 2:37 PM Zachary Santer  wrote:
>
> On the other hand, do funsubs give us the answer here?
>
> shopt -s lastpipe
> declare -a pid=()
> command-1 | tee >( command-2 ) ${ pid+=( "${!}" ); } >( command-3 ) ${
> pid+=( "${!}" ); } >( command-4 ) ${ pid+=( "${!}" ); }
> wait -- "${pid[@]}"

This absolutely works, so there you go. When expanding multiple
process substitutions as arguments to a command, you can save the $!
resulting from each one by following it with an unquoted funsub that
does that work and doesn't expand to anything.

> That looks obnoxious

I don't mind how it looks. It works.

> declare -a pid=()
> {
>   commands
> } {fd[0]}< <( command-1 )  ${ pid+=( "${!}" ); } {fd[1]}< <( command-2
> ) ${ pid+=( "${!}" ); } {fd[2]}< <( command-3 ) ${ pid+=( "${!}" ); }
>
> Do things start breaking?

Yeah, this doesn't work at all, but whatever. You can get the same
result by performing each redirection with exec individually, followed
by saving $! somewhere.

I want to say that expanding multiple process substitutions as
arguments to a single command was the one situation where you couldn't
arrange things such that you can save $! after each time a process
substitution is expanded, and funsubs seem to have solved that
problem. I won't be offended if there's still no mechanism to tell the
running script about pids of multiple child processes at the same
time, when later versions of bash come out.

Description:
See my attachments, though. Something about my second set of process
substitutions is causing 'wait' without arguments to not wait for the
final procsub, whose pid is still in $! at the time.

Repeat-By:
procsub-wait-solution false


On Fri, Jul 5, 2024 at 2:38 PM Chet Ramey  wrote:
>
> There is code tagged
> for bash-5.4 that allows `wait -n' to look at these exited processes as
> long as it's given an explicit set of pid arguments.

I agree with all the knowledgeable people here telling you that the
way 'wait -n' is still implemented in bash-5.3-alpha is obviously
wrong, but I also want to point out that the way you plan to change
its behavior in bash-5.4 still won't allow Greg's example below to
work reliably.

On Fri, Jul 12, 2024 at 9:06 PM Greg Wooledge  wrote:
>
> greg@remote:~$ cat ~greybot/factoids/wait-n; echo
> Run up to 5 processes in parallel (bash 4.3): i=0 j=5; for elem in 
> "${array[@]}"; do (( i++ < j )) || wait -n; my_job "$elem" & done; wait

He'd have to do something like this:
set -o noglob
i=0 j=5
declare -a pid_set=()
for elem in "${array[@]}"; do
  if (( ! i++ < j )); then
wait -n -p terminated_pid -- "${!pid_set[@]}"
unset pid_set[terminated_pid]
  fi
  my_job "$elem" &
  pid_set[${!}]=''
done
wait

It's probably best that 'wait -n' without arguments and 'wait -n' with
explicit pid arguments have the same relationship to each other as
'wait' without arguments and 'wait' with explicit pid arguments.

In other words, process substitutions notwithstanding,
$ wait
and
$ wait -- "${all_child_pids[@]}"
do the same thing.

So,
$ wait -n
and
$ wait -n -- "${all_child_pids[@]}"
should also do the same thing.
zsant@Zack2021HPPavilion MSYS ~/repos/bash
$ ./bash.exe ~/random/procsub-wait-solution true
+ : '5.3.0(1)-alpha'
+ wait_explicit_pids=true
+ pid=()
+ declare -a pid
++ sleep 8
++ pid+=(${!})
++ sleep 6
++ pid+=(${!})
++ sleep 4
++ pid+=(${!})
++ sleep 2
++ pid+=(${!})
+ : /dev/fd/63 /dev/fd/62 /dev/fd/61 /dev/fd/60
+ SECONDS=0
+ : 'declare -a pid=([0]="20370" [1]="20371" [2]="20372" [3]="20373")' 
'$!=20373'
+ [[ true == true ]]
+ wait -- 20370 20371 20372 20373
+ : termination status 0 at 8 seconds
+ pid=()
+ printf 'The quick brown fox jumps over the lazy dog.\n'
++ set +x
++ pid+=(${!})
++ set +x
++ pid+=(${!})
++ set +x
++ pid+=(${!})
++ set +x
++ pid+=(${!})
+ tee -- /dev/fd/63 /dev/fd/62 /dev/fd/61 /dev/fd/60
The quick brown fox jumps over the lazy dog.
+ SECONDS=0
+ : 'declare -a pid=([0]="20375" [1]="20376" [2]="20377" [3]="20378")' 
'$!=20378'
+ [[ true == true ]]
+ wait -- 20375 20376 20377 20378
overly emphatic : The. Quick. Brown. Fox. Jumps. Over. The. Lazy. Dog.
shouting : THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG.
tortoise, actually : The quick brown fox jumps over the lazy tortoise.
line length : 44
+ : termination 

Re: [bug #65981] "bash test -v" does not work as documented with "export KEY="

2024-07-13 Thread Robert Elz
Date:Sat, 13 Jul 2024 08:57:18 +0200
From:        Andreas Kähäri
Message-ID:  


  | After the "export", the variable has been *set*.

That's right, but there's no point answering that message here,
the anonymous poster will almost certainly never see it.
Lawrence already replied on savannah so there's no need to
do that either (I looked).

  | If you want to test whether a variable contains only an empty string,
  | use "test -z variablename".

That would be a guaranteed false.   You need test -z "${variablename}"

kre




Re: [bug #65981] "bash test -v" does not work as documented with "export KEY="

2024-07-13 Thread Andreas Kähäri
On Fri, Jul 12, 2024 at 10:15:02PM -0400, anonymous wrote:
> URL:
>   
> 
>  Summary: "bash test -v" does not work as documented with
> "export KEY="
>Group: The GNU Bourne-Again SHell
>Submitter: None
>Submitted: Sat 13 Jul 2024 02:15:01 AM UTC
> Category: None
> Severity: 3 - Normal
>   Item Group: None
>   Status: None
>  Privacy: Public
>  Assigned to: None
>  Open/Closed: Open
>  Discussion Lock: Any
> 
> 
> ___
> 
> Follow-up Comments:
> 
> 
> ---
> Date: Sat 13 Jul 2024 02:15:01 AM UTC By: Anonymous
> The documentation of the
> [https://www.gnu.org/software/bash/manual/bash.html#Bash-Conditional-Expressions
> Bash-Conditional-Expressions] "-v varname" says:
> 
> > -v varname
> >True if the shell variable varname is set (has been assigned a value).
> 
> I think "export TEST_EQUAL_WITHOUT_VALUE=" does not assign a value to the
> varname, does it?


After the "export", the variable has been *set*. The statement
also assigns an empty string to the variable, so it has a value (testing
the variable's value against an empty string would yield a boolean true
result, showing that there is a value to test with).

The "export" is unimportant, it just promotes the shell variable to an
environment variable, which isn't relevant to this issue.
  
If you want to test whether a variable contains only an empty string,
use "test -z variablename". Note that that does not test whether the
variable is *set* though (unset variables expand to empty strings too,
unless "set -u" is in effect, in which case it provokes an "unbound
variable" diagnostic from the shell).
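The distinction Andreas describes can be seen directly (a sketch; `test -v` needs bash 4.2 or later):

```shell
export TEST_EQUAL_WITHOUT_VALUE=
test -v TEST_EQUAL_WITHOUT_VALUE && echo set       # set: it was assigned a value
test -z "$TEST_EQUAL_WITHOUT_VALUE" && echo empty  # and that value is the empty string
unset TEST_EQUAL_WITHOUT_VALUE
test -v TEST_EQUAL_WITHOUT_VALUE || echo unset     # now genuinely unset
```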

Andreas


> Test case:
> > docker run -it --rm bash:latest
> > export TEST_EQUAL_WITHOUT_VALUE=
> > if test -v TEST_EQUAL_WITHOUT_VALUE; then echo "true"; else echo "false";
> fi
> 
> The output is "true", but there is no value assigned to it.
> 
> That is fine to me. I suggest to change the documentation and remove the "(has
> been assigned a value)" part.
> 
> 
> Tested with: GNU bash, version 5.2.26(1)-release (x86_64-pc-linux-musl)
> 
> 
> 
> 
> 
> 
> 
> ___
> 
> Reply to this item at:
> 
>   
> 
> ___
> Message sent via Savannah
> https://savannah.gnu.org/



-- 
Andreas (Kusalananda) Kähäri
Uppsala, Sweden




Re: waiting for process substitutions

2024-07-12 Thread Oğuz
On Saturday, July 13, 2024, Greg Wooledge  wrote:
>
> If two jobs happen to finish simultaneously, the next call to wait -n
> should reap one of them, and then the call after that should reap
> the other.  That's how everyone wants it to work, as far as I've seen.
>
> *Nobody* wants it to skip the job that happened to finish at the exact
> same time as the first one, and then wait for a third job.  If that
> happens in the loop above, you'll have only 4 jobs running instead of 5
> from that point onward.
>
>
It feels like deja vu all over again. Didn't we already discuss this and
agree that `wait -n' should wait for jobs one by one without skipping any?
Did it not make it to 5.3?


-- 
Oğuz


Re: waiting for process substitutions

2024-07-12 Thread Greg Wooledge
On Sat, Jul 13, 2024 at 07:40:42 +0700, Robert Elz wrote:
> Please just change this, use the first definition of "next job to
> finish" - and in the case when there are already several of them,
> pick one, any one - you could order them by the time that bash reaped
> the jobs internally, but there's no real reason to do so, as that
> isn't necessarily the order the actual processes terminated, just
> the order the kernel picked to answer the wait() sys call, when
> there are several child zombies ready to be reaped.

This would be greatly preferred, and it's how most people *think*
wait -n currently works.

The common use case for "wait -n" is a loop that tries to process N jobs
at a time.  Such as this one:

greg@remote:~$ cat ~greybot/factoids/wait-n; echo
Run up to 5 processes in parallel (bash 4.3): i=0 j=5; for elem in 
"${array[@]}"; do (( i++ < j )) || wait -n; my_job "$elem" & done; wait

If two jobs happen to finish simultaneously, the next call to wait -n
should reap one of them, and then the call after that should reap
the other.  That's how everyone wants it to work, as far as I've seen.

*Nobody* wants it to skip the job that happened to finish at the exact
same time as the first one, and then wait for a third job.  If that
happens in the loop above, you'll have only 4 jobs running instead of 5
from that point onward.
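Expanded into a runnable form, the factoid's pattern looks like this (a sketch; `my_job` is a stand-in for real work, and it assumes the reap-one-per-call semantics of `wait -n` that Greg describes):

```shell
#!/bin/bash
my_job() { sleep 0.1; }      # stand-in for the real work

i=0 j=5
for elem in a b c d e f g h; do
  (( i++ < j )) || wait -n   # once 5 jobs are running, reap one before starting another
  my_job "$elem" &
done
wait                         # reap whatever is still running
echo all jobs done
```

If `wait -n` ever skips a job that finished simultaneously with another, each skip permanently lowers the effective parallelism of this loop, which is the failure mode being objected to here.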



Re: waiting for process substitutions

2024-07-12 Thread Robert Elz
Date:Fri, 12 Jul 2024 11:48:15 -0400
From:Chet Ramey 
Message-ID:  <258bcd3a-a936-4751-8e24-916fbeb9c...@case.edu>


  | Not really, since the original intent was to wait for the *next* process
  | to terminate.

There are two issues with that.   The first is "next after what", one
interpretation would be "the next after the last which was waited upon"
(one way or another).   The other, and the one you seem to imply, is
"next which terminates after now" - ie: still running when the wait
command is executed.   But that's an obvious race condition, that's
the second issue, as there is no possible way to know (in the script
which is executing "wait -n") which processes have terminated at that
instant.

Eg: let's assume I have two running bg jobs, one which is going to take
a very long time, the other which will finish fairly soon.

For this e-mail, I'll emulate those two with just "sleep", though one
of them might be a rebuild of firefox, and all its dependencies, from
sources (yes, including rust), which will take some time, and the other
is a rebuild of "true" (/bin/true not the builtin), which probably won't,
as an empty executable file is all that's required.

So, and assuming an implementation of sleep which accepts fractional
seconds:

sleep $(( 5 * 24 * 60 * 60 )) & J1=$!
sleep 0.01 & J2=$!

printf 'Just so the shell is doing something: jobs are %s & %s\n' \
"${J1}" "${J2}"

wait -n

Now which of the two background jobs is that waiting for?  Which do you
expect the script writer intended to wait for?   You can make the 2nd
sleep be "sleep 0" if you want to do a more reasonable test, just make
sure when you test, to get a valid result, you don't interrupt that wait.

The current implementation is lunacy, cannot possibly have any users,
since without doing a wait the script cannot possibly know what has
finished already, so can't possibly be explicitly excluding jobs which
just happen to have finished after the last "wait -n" (or other wait).

Of course, in the above simple example, the wait -n could be replaced
by wait "${J2}" which would work just fine, but a real example would
probably have many running jobs, some of which are very quick, and
others which aren't, and some arbitrary ones of the quick ones might
be so quick that they are finished before the script is ready to wait.
Even a firefox build might be that quick, if the options passed to
the top level make happen to contain a make syntax error, and so all
that happens is an error message (Usage:...) and very quick exit.

Please just change this, use the first definition of "next job to
finish" - and in the case when there are already several of them,
pick one, any one - you could order them by the time that bash reaped
the jobs internally, but there's no real reason to do so, as that
isn't necessarily the order the actual processes terminated, just
the order the kernel picked to answer the wait() sys call, when
there are several child zombies ready to be reaped.


  | > Bash is already tracking the pids for all child processes not waited
  | > on, internally. So I imagine it wouldn't be too much work to make that
  | > information available to the script it's running.
  |
  | So an additional feature request.

If it helps, to perhaps provide some consistency, the NetBSD shell has
a builtin:

   jobid [-g|-j|-p] [job]
 With no flags, print the process identifiers of the processes in
 the job.

(-g instead gives the process group, -j the job identifier (%n), and
-p the lead pid (that which was $! when the job was started, which might
also be the process group, but also might not be).   The "job" arg (which
defaults to '%%') can identify the job by any of the methods that wait,
or kill, or "fg" (etc) allow, that is, %% %- %+ %string or a pid ($!)).
Just one "job" arg, and only one option allowed, so there's no temptation
(nor requirement) to attempt to write sh code to parse the output and
work out what is what.  It's a builtin, running it multiple times is
cheaper than any parse attempt could possibly be.

jobid exits with status 2 if there is an argument error, status 1 if,
with -g, the job had no separate process group or, with -p, there
is no process group leader (should not happen), and otherwise
exits with status 0.

("argument error" includes both things like giving 2 options, or an
invalid (unknown) one, or giving a job arg that doesn't resolve to a
current (running, stopped, or terminated but unwaited) job.   Job
control needs to be enabled (rare in scripts) to get separate process
groups.   The "process group leader" is just $! - has no particular
relationship with actual process groups (and yes, the wording could be better).)

That command can be run after each job is created, using $! as the job
arg, and saving the pids, and/or job number (for later execution when
needed) however the script likes.
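Bash has no jobid builtin, but the same bookkeeping pattern — record $! as each job is created, then wait by pid later — can be sketched directly (task list here is illustrative):

```shell
#!/bin/bash
# Save each background job's pid the moment it is created.
pids=()
for task in "sleep 0.2" "sleep 0.1" "true"; do
    $task & pids+=( "$!" )
done

# Later, wait for each saved pid explicitly; this works even for
# jobs that finished long before the wait is issued.
for pid in "${pids[@]}"; do
    wait "$pid"
    echo "job $pid exited with status $?"
done
```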


Re: waiting for process substitutions

2024-07-12 Thread Chet Ramey

On 7/9/24 6:12 AM, Zachary Santer wrote:

On Fri, Jul 5, 2024 at 2:38 PM Chet Ramey  wrote:


On 6/29/24 10:51 PM, Zachary Santer wrote:

so you were then able to wait for each process substitution individually,
as long as you saved $! after they were created. `wait' without arguments
would still wait for all process substitutions (procsub_waitall()), but
the man page continued to guarantee only waiting for the last one.

This was unchanged in bash-5.2. I changed the code to match what the man
page specified in 10/2022, after

https://lists.gnu.org/archive/html/bug-bash/2022-10/msg00107.html


Is what's being reported there undesirable behavior? 


Yes, of course. It shouldn't hang, even if there is a way to work around
it. The process substitution and the subshell where `wait' is running
don't necessarily have a strict parent-child relationship, even if bash
optimizes away another fork for the subshell.



On the other hand, allowing 'wait' without arguments to wait on all
process substitutions would allow my original example to work, in the
case that there aren't other child processes expected to outlive this
pipeline.


So you're asking for a new feature, probably controlled by a new shell
option.


We've discussed this before. `wait -n' waits for the next process to
terminate; it doesn't look back at processes that have already terminated
and been added to the list of saved exit statuses. There is code tagged
for bash-5.4 that allows `wait -n' to look at these exited processes as
long as it's given an explicit set of pid arguments.


I read through some of that conversation at the time. Seemed like an
obvious goof. Kind of surprised the fix isn't coming to bash 5.3,
honestly.


Not really, since the original intent was to wait for the *next* process
to terminate. That didn't change when the ability to wait for explicit
pids was added.


And why "no such job" instead of "not a child of this shell"?


Because wait -n takes pid arguments that are part of jobs.



They're similar, but they're not jobs. They run in the background, but you
can't use the same set of job control primitives to manipulate them.
Their scope is expected to be the lifetime of the command they're a part
of, not run in the background until they're wanted.


Would there be a downside to making procsubs jobs?


If you want to treat them like jobs, you can do that. It just means doing
more work using mkfifo and giving up on using /dev/fd. I don't see it as
being worth the work to do it internally.
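What "doing it with mkfifo" might look like, as a hedged sketch (paths come from mktemp, command names are arbitrary): the reader becomes an ordinary background job that can be waited for, killed, or listed like any other.

```shell
#!/bin/bash
tmp=$(mktemp -d)
mkfifo "$tmp/pipe"

tr a-z A-Z < "$tmp/pipe" &   # stands in for >( command )
reader=$!

printf 'hello\n' > "$tmp/pipe"   # the role /dev/fd would have played

wait "$reader"                   # a real job: wait applies to it by pid
rm -r "$tmp"
```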



Consider my original example:
command-1 | tee >( command-2 ) >( command-3 ) >( command-4 )

Any nontrivial command is going to take more time to run than it took
to be fed its input.


In some cases, yes.


The idea that no process in a process
substitution will outlive its input stream precludes a reading process
substitution from being useful.


It depends on whether or not it can cope with its input (in this case)
file descriptor being invalidated. In some cases, yes, in some cases, no.


When you say "invalidated," are you referring to something beyond the
process in a reading process substitution simply receiving EOF?
Everything should be able to handle that much.


They're pipes, so there are more semantics beyond receiving EOF on reading.
Writing on a pipe where the reader has gone away, for example, like below.



And nevermind
exec {fd}< <( command )
I shouldn't do this?


Sure, of course you can. You've committed to managing the file descriptor
yourself at this point, like any other file descriptor you open with exec.


But then, if I 'exec {fd}<&-' before consuming all of command's
output, I would expect it to receive SIGPIPE and die, if it hasn't
already completed. And I might want to ensure that this child process
has terminated before the calling script exits.


Then save $! and wait for it. The only change we're talking about here is
to accommodate your request to be able to wait for multiple process
substitutions created before you have a chance to save all of the pids.
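A minimal sketch of "save $! and wait for it" for a single process substitution (waiting on a procsub's pid assumes a bash version where that is supported, as discussed earlier in this thread):

```shell
#!/bin/bash
exec {fd}< <( printf 'some output\n' )
psub=$!                 # $! holds the process substitution's pid

cat <&"$fd"             # consume its output
exec {fd}<&-            # close our end of the pipe

wait "$psub"            # ensure the procsub terminated before we exit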
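A minimal sketch of "save $! and wait for it" for a single process substitution (waiting on a procsub's pid assumes a bash version where that is supported, as discussed earlier in this thread):

```shell
#!/bin/bash
exec {fd}< <( printf 'some output\n' )
psub=$!                 # $! holds the process substitution's pid

cat <&"$fd"             # consume its output
exec {fd}<&-            # close our end of the pipe

wait "$psub"            # ensure the procsub terminated before we exit
```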




Why should these be different in practice?

(1)
mkfifo named-pipe
child process command < named-pipe &
{
foreground shell commands
} > named-pipe

(2)
{
foreground shell commands
} > >( child process command )


Because you create a job with one and not the other, explicitly allowing
you to manipulate `child' directly?


Right, but does it have to be that way? What if the asynchronous
processes in process substitutions were jobs?


If you want them to work that way, take a shot at it. I don't personally
think it's worth the effort.


If you need to capture all the PIDs of all your background processes,
you'll have to launch them one at a time.  This may mean using FIFOs
(named pipes) instead of anonymous process substitutions, in some cases.


Bash is already tracking the pids for all child processes not waited
on, internally. So I imagine it wouldn't be too much work to make that
information available to the script it's running.


So an additional feature request.

Re: pwd and prompt don't update after deleting current working directory

2024-07-12 Thread Chet Ramey

On 7/11/24 9:53 PM, David Hedlund wrote:

Thanks, Lawrence! I found this discussion helpful and believe it would be a 
valuable feature to add. Can I submit this as a feature request?


I'm not going to add this. It's not generally useful for interactive
shells, and dangerous for non-interactive shells.

If this is a recurring problem for you, I suggest you write a shell
function to implement the behavior you want and run it from
PROMPT_COMMAND.

That behavior could be as simple as

pwd -P >/dev/null 2>&1 || cd ..
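A slightly more elaborate version of that one-liner (function name and message wording are invented here) could climb to the nearest ancestor directory that still exists:

```shell
# Hypothetical PROMPT_COMMAND helper: if $PWD has been unlinked,
# move to the closest ancestor directory that still exists.
bounce_if_unlinked() {
    pwd -P >/dev/null 2>&1 && return   # cwd still reachable; nothing to do
    local d=$PWD
    until cd "$d" 2>/dev/null; do
        d=${d%/*}          # trim the last path component
        [[ $d ]] || d=/    # fall back to / if everything is gone
    done
    printf 'bash: previous directory was unlinked; now in %s\n' "$PWD" >&2
}
PROMPT_COMMAND=bounce_if_unlinked
```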

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread Phi Debian
Oops I forgot the reply to all in my reply to @David Hedlund


I was on the same tune as you @g...@wooledge.org .

==
On Fri, Jul 12, 2024 at 12:14 AM David Hedlund  wrote:

>
>
> When a directory is deleted while the user is inside it, the terminal
> should automatically return to the parent directory.
>
>
Jeez, this looks pretty dangerous, doesn't it? Imagine a script (or a
fast typer in an interactive shell) that is doing some bookkeeping inside a
directory (i.e. removing some files in $PWD), and $PWD is unlinked by
another random process; then your script (or fast typer, who types and reads
afterwards) would continue its bookkeeping in the parent (of the login
shell dir?). It makes no sense.

So when a $PWD is unlinked while working in it, the only things a shell can
do is to emit a message like

$ rm -rf $PWD
$ >xyz
bash: xyz: No such file or directory

Now you may argue this is not explicit enough: nothing in there tells you
that $PWD was unexpectedly unlinked. You could implement something more
explicit, like this, for an interactive shell; a script should simply bail
out on error.

$ PS1='$(ls $PWD>/dev/null 2>&1||echo $PWD was unlinked)\n$ '

$ pwd
/home/phi/tmp/yo

$ ls
asd  qwe  rty

$ rm -rf $PWD
/home/phi/tmp/yo was unlinked
$ ls
/home/phi/tmp/yo was unlinked
$ cd ..

$ mkdir -p yo/asd ; >yo/qwe >yo/rty ; cd yo

$ pwd
/home/phi/tmp/yo

=

Bottom line, this is not a bug, and there is no enhancement request here.

Or 'may be'
on bash error
bash: xyz: No such file or directory
the full path could be expanded

bash: /home/phi/tmp/yo/xyz: No such file or directory

But even that I am not sure.


Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread Greg Wooledge
On Fri, Jul 12, 2024 at 10:26:54 +0700, Robert Elz wrote:
> Is it supposed to continually run "stat($PWD, ...)" forever (after
> all the directory might be removed from elsewhere while you're in
> the middle of typing a command - what do you expect to happen then?)

It's even worse: let's say a new option is added, to have bash stat $PWD
before or after every command is executed.  If the stat fails, then
bash changes directory.

Then let's say you write a script, and run it under bash using this
new option.  If the script's working directory is unlinked, and this
new option triggers, then bash will change its working directory.

This could happen in between *any* pair of commands.  The script won't
even know that it happened, and won't be expecting it.

Essentially, this would make it impossible to use any relative pathnames
safely.  A script has to *know* what its working directory is, or it has
to use only absolute pathnames.  Otherwise, something like this:

cd "$mydir" || exit
touch "$tmpfile"
...
rm -f "$tmpfile"

could end up removing a temp file (accessed via a relative pathname)
from the wrong directory, because the working directory changed before
the rm command was executed.



Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread Robert Elz
Date:Fri, 12 Jul 2024 03:53:01 +0200
From:David Hedlund 
Message-ID:  <820e6ee2-7444-4a01-991a-3530c2591...@gnu.org>

  | Thanks, Lawrence! I found this discussion helpful and believe it would 
  | be a valuable feature to add. Can I submit this as a feature request?

You can try, but don't expect it to succeed.   Apart from this not
being desirable behaviour at all (the directory bash is in still
exists as long as it is being referenced, there's just no longer a
path which reaches it from "/") how exactly do you expect bash to
discover that the directory it is in has been orphaned like that?

Is it supposed to continually run "stat($PWD, ...)" forever (after
all the directory might be removed from elsewhere while you're in
the middle of typing a command - what do you expect to happen then?)

kre




Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread Lawrence Velázquez
On Thu, Jul 11, 2024, at 9:53 PM, David Hedlund wrote:
> On 2024-07-12 00:54, Lawrence Velázquez wrote:
>> (You're free to argue that bash *should* behave this way, but that's
>> a feature request, not a bug report.  And having bash automatically
>> update its working directory based on filesystem changes would open
>> up its own can of worms.)
>>
> Thanks, Lawrence! I found this discussion helpful and believe it would 
> be a valuable feature to add. Can I submit this as a feature request?

You've effectively done so already; there's no separate process to
follow.  This discussion is the feature request, and the maintainer
will read it eventually.  I only drew the distinction because the
current behavior is intentional, so your argument cannot be "bash
has a bug".

-- 
vq



Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread Eduardo Bustamante
On Thu, Jul 11, 2024 at 7:32 PM David Hedlund  wrote:
(...)

> I understand. So the feature request should be an option "-b" (for
> bounce out of the directory when deleted) for example?
>

It'd be helpful to know about the use cases for such a feature, though.

My assumptions:

- This would only be useful in interactive sessions. Portable scripts would
not have access to this feature, and so in these cases the script writer
must explicitly consider the case where $PWD is unlinked while the process
is running. Or decide that it's not worth handling.

- In an interactive session: What are the conditions that lead to a
directory being removed without the user's knowledge?

- Also, what's wrong with letting a user deal with this by issuing `cd ..`?
Surely, this doesn't happen frequently enough that it merits increasing the
complexity of Bash to handle it automatically.


Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread Eduardo Bustamante
On Thu, Jul 11, 2024 at 7:20 PM David  wrote:
(...)

> Hi, I disagree, and I think if you understand better why this occurs, you
> will understand why knowledgable users will disagree, and you will
> change your opinion.
>

I concur. The requested feature changes behavior in a
backwards-incompatible way, and thus it is guaranteed to break existing
scripts. The semantics of file removal in POSIX systems is well-defined,
and users of Bash and other POSIX compliant shells should make an effort to
learn about them, instead of changing every system to match their mental
model.


Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread David Hedlund

On 2024-07-12 04:19, David wrote:

On Thu, 11 Jul 2024 at 22:14, David Hedlund  wrote:


When a directory is deleted while the user is inside it, the terminal
should automatically return to the parent directory.

Hi, I disagree, and I think if you understand better why this occurs, you
will understand why knowledgeable users will disagree, and you will
change your opinion.

This is a fundamental aspect of how Unix-like operating systems work,
and it will not be changed because it is very useful in other situations.
It occurs because of the designed behaviour of the 'unlink' system call.
You can read about that in 'man 2 unlink'.


 Expected behaviour
When a directory is deleted while the user is inside it, the terminal
should automatically return to the parent directory.
 Actual behaviour
The terminal remains in the deleted directory's path, even though the
directory no longer exists.

Your final phrase there "the directory no longer exists" is incorrect.

The directory does still exist. The 'rm' command did not destroy it.
Any processes that have already opened it can continue to use it.
The terminal is one of those processes.

Deleting any file (including your directory, because directories have
file-like behaviour in this respect, same as every other directory entry)
just removes that file object from its parent directory entries. It does
not destroy the file in any way. That means that no new processes
can access the file, because now there's no normal way to discover
that it exists, because it no longer appears in its parent directory entries.
But any process that already has the file open can continue to use it.

So your directory does not cease to exist until nothing is using it, and
even then it is not destroyed, merely forgotten entirely.

Here's more explanation:
   https://en.wikipedia.org/wiki/Rm_(Unix)#Overview

I understand. So the feature request should be an option "-b" (for 
bounce out of the directory when deleted) for example?


Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread David
On Thu, 11 Jul 2024 at 22:14, David Hedlund  wrote:

> When a directory is deleted while the user is inside it, the terminal
> should automatically return to the parent directory.

Hi, I disagree, and I think if you understand better why this occurs, you
will understand why knowledgeable users will disagree, and you will
change your opinion.

This is a fundamental aspect of how Unix-like operating systems work,
and it will not be changed because it is very useful in other situations.
It occurs because of the designed behaviour of the 'unlink' system call.
You can read about that in 'man 2 unlink'.

>  Expected behaviour
> When a directory is deleted while the user is inside it, the terminal
> should automatically return to the parent directory.

>  Actual behaviour
> The terminal remains in the deleted directory's path, even though the
> directory no longer exists.

Your final phrase there "the directory no longer exists" is incorrect.

The directory does still exist. The 'rm' command did not destroy it.
Any processes that have already opened it can continue to use it.
The terminal is one of those processes.

Deleting any file (including your directory, because directories have
file-like behaviour in this respect, same as every other directory entry)
just removes that file object from its parent directory entries. It does
not destroy the file in any way. That means that no new processes
can access the file, because now there's no normal way to discover
that it exists, because it no longer appears in its parent directory entries.
But any process that already has the file open can continue to use it.

So your directory does not cease to exist until nothing is using it, and
even then it is not destroyed, merely forgotten entirely.

Here's more explanation:
  https://en.wikipedia.org/wiki/Rm_(Unix)#Overview
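The point that the directory continues to exist while a process is using it can be seen directly (a sketch; the path comes from mktemp):

```shell
#!/bin/bash
tmp=$(mktemp -d)
cd "$tmp"
rm -r "$tmp"        # unlink the directory we are sitting in
pwd                 # still prints the old path; the shell holds a reference
ls -a               # the directory object still exists, but is now empty
touch file 2>/dev/null ||
    echo "no new entries can be created in an unlinked directory"
cd /
```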



Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread David Hedlund



On 2024-07-12 00:54, Lawrence Velázquez wrote:

On Thu, Jul 11, 2024, at 6:08 PM, David Hedlund wrote:

 Expected behaviour
When a directory is deleted while the user is inside it, the terminal
should automatically return to the parent directory.

```
user@domain:~/test$ mkdir ~/test && cd ~/test && touch foo && ls
foo
user@domain:~/test$ rm -r ~/test
user@domain:~/$
```

Why do you expect this behavior?  Other shells and utilities typically
do what bash does -- i.e., nothing.

(You're free to argue that bash *should* behave this way, but that's
a feature request, not a bug report.  And having bash automatically
update its working directory based on filesystem changes would open
up its own can of worms.)

Thanks, Lawrence! I found this discussion helpful and believe it would 
be a valuable feature to add. Can I submit this as a feature request?




Re: Local variable can not unset within shell functions

2024-07-11 Thread Robert Elz
Date:Thu, 11 Jul 2024 16:01:20 -0400
From:Chet Ramey 
Message-ID:  <2739cbbc-d44e-423e-869e-f2884c148...@case.edu>

  | The bug in bash-4.2 was that [...]

  | This would have been clearer to see (and more misleading) if the variable
  | x also had a value at the global scope.

If anything, I'd say that the fix does not go far enough.   There is
no conceptual difference (in sh) between any type of unset variables,
whatever other attributes they might have.   All the conceivable variables
that could ever exist are essentially just unset, with no assigned
attributes (unless the -a option is enabled, in which case all those
unset, unknown, variables have the export attribute), until they are
perhaps given one - they remain unset until given a value.

To the implementation they're different, there's no way for the
implementation to remember every possible variable name, with its
non-value and no attributes, nor is there any point, such a thing
can be created whenever it is required to exist, and then destroyed
again later - but that's just an implementation technique, another
would be for variables once created to be retained forever, even if
unset and with no attributes.

But there's no reason to expose any of this to the application, all
unset variables should behave identically - whether they have attributes
or not (except in how the attribute affects them when a value is assigned).

So, I'd say that the current bug is, from the original report here:

wer...@suse.de said:
  | ... for global variables it works as expected.

That should be fixed, and "declare -p var_name_never_seen_before"
should output either

declare -- var_name_never_seen_before

or perhaps, if it has no attributes

unset -v var_name_never_seen_before

If that result is later eval'd, perhaps in a different shell instance,
where that name has also never previously been noted, then whether or
not the shell creates a variable struct for it, or not, is irrelevant
to anything (except implementation convenience) and the behaviour
should be identical.

That is, other than syntactically incorrect variable names, declare -p
should never generate an error.

Further, if "localness" is considered an attribute of a variable (which
isn't how I would implement it, but assuming it is) then surely declare
should have an option to set the local attribute, and declare -p should
generate a command which restores that (just as it does for the export
attribute, the integer attribute, and, I assume, others).

kre




Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread Lawrence Velázquez
On Thu, Jul 11, 2024, at 6:08 PM, David Hedlund wrote:
>  Expected behaviour
> When a directory is deleted while the user is inside it, the terminal 
> should automatically return to the parent directory.
>
> ```
> user@domain:~/test$ mkdir ~/test && cd ~/test && touch foo && ls
> foo
> user@domain:~/test$ rm -r ~/test
> user@domain:~/$
> ```

Why do you expect this behavior?  Other shells and utilities typically
do what bash does -- i.e., nothing.

(You're free to argue that bash *should* behave this way, but that's
a feature request, not a bug report.  And having bash automatically
update its working directory based on filesystem changes would open
up its own can of worms.)

-- 
vq



Re: Local variable can not unset within shell functions

2024-07-11 Thread Chet Ramey

On 7/11/24 9:42 AM, Dr. Werner Fink wrote:

Hi,

I've a report that with later bash the following which works in bash-4.2

  x () {
   local x=y
   declare -p x
   echo $x
   unset x
   declare -p x
   echo $x
  }

with

  linux-40cm:~ # x () {
  >   local x=y
  >   declare -p x
  >   echo $x
  >   unset x
  >   declare -p x
  >   echo $x
  >  }
  linux-40cm:~ # x
  declare -- x="y"
  y
  -bash: declare: x: not found
  
but with bash-5.X the reporter sees (and complains)


  sl15sp5:~ # x () {
  >   local x=y
  >   declare -p x
  >   echo $x
  >   unset x
  >   declare -p x
  >   echo $x
  >  }
  sl15sp5:~ # x
  declare -- x="y"
  y
  declare -- x


The variable is unset: it has attributes (local) but has not been assigned
a value.

The idea behind preserving the local attribute is that the variable will
continue to be reported as unset while in the function, but assigning it
a new value will preserve it as a local variable. This is how bash has
behaved since 1995 (bash-2.0).

The bug in bash-4.2 was that it didn't tell you which instance of a
variable the shell would modify if it were subsequently assigned a value.
The local variable still exists, is unset, and would be used by expansions
and assignments, but `declare' would not tell you that.

This would have been clearer to see (and more misleading) if the variable
x also had a value at the global scope.

A discussion that prompted the change starts at

https://lists.gnu.org/archive/html/bug-bash/2013-11/msg0.html

Chet
--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: Local variable can not unset within shell functions

2024-07-11 Thread Greg Wooledge
On Thu, Jul 11, 2024 at 15:39:41 -0400, Lawrence Velázquez wrote:
> I won't speculate about the issue, but your subject goes too far.
> The variable really is unset here:
> 
>   % cat /tmp/x.bash
>   x() {
>   local x=y
>   declare -p x
>   echo "x is ${x-unset}"
>   unset x
>   declare -p x
>   echo "x is ${x-unset}"
>   }
> 
>   x
>   % bash /tmp/x.bash
>   declare -- x="y"
>   x is y
>   declare -- x
>   x is unset

It looks like newer versions of bash retain *some* memory of the unset
local variable's name, but not its flags or prior contents.

hobbit:~$ f() { local -i x=0; declare -p x; unset x; declare -p x; }
hobbit:~$ f
declare -i x="0"
declare -- x

Lawrence is spot on with the semantics, though.  The unset variable
behaves exactly like a variable that was never set.

hobbit:~$ g() { local -i x=0; unset x; echo "plus:${x+plus} minus:${x-minus}";}
hobbit:~$ g
plus: minus:minus

So the question is why Werner's associate cares enough about the output
of declare -p to call this a bug, rather than a simple change of internal
implementation.

Is there some actual semantic difference in behavior between bash versions
that we need to be concerned about here?



Re: Local variable can not unset within shell functions

2024-07-11 Thread Lawrence Velázquez
On Thu, Jul 11, 2024, at 9:42 AM, Dr. Werner Fink wrote:
> I've a report that with later bash the following which works in bash-4.2
>
> [...]
>
>  linux-40cm:~ # x () {
>  >   local x=y
>  >   declare -p x
>  >   echo $x
>  >   unset x
>  >   declare -p x
>  >   echo $x
>  >  }
>  linux-40cm:~ # x
>  declare -- x="y"
>  y
>  -bash: declare: x: not found
> 
> but with bash-5.X the reporter sees (and complains)
>
>  sl15sp5:~ # x () {
>  >   local x=y
>  >   declare -p x
>  >   echo $x
>  >   unset x
>  >   declare -p x
>  >   echo $x
>  >  }
>  sl15sp5:~ # x
>  declare -- x="y"
>  y
>  declare -- x
>
> ... for global variables it works as expected.

I won't speculate about the issue, but your subject goes too far.
The variable really is unset here:

% cat /tmp/x.bash
x() {
local x=y
declare -p x
echo "x is ${x-unset}"
unset x
declare -p x
echo "x is ${x-unset}"
}

x
% bash /tmp/x.bash
declare -- x="y"
x is y
declare -- x
x is unset

-- 
vq



Re: Local variable can not unset within shell functions

2024-07-11 Thread alex xmb sw ratchev
On Thu, Jul 11, 2024, 16:34 alex xmb sw ratchev  wrote:

>
>
> On Thu, Jul 11, 2024, 15:42 Dr. Werner Fink  wrote:
>
>> Hi,
>>
>> I've a report that with later bash the following which works in bash-4.2
>>
>>  x () {
>>   local x=y
>>   declare -p x
>>   echo $x
>>   unset x
>>   declare -p x
>>   echo $x
>>  }
>>
>> with
>>
>>  linux-40cm:~ # x () {
>>  >   local x=y
>>  >   declare -p x
>>  >   echo $x
>>  >   unset x
>>  >   declare -p x
>>  >   echo $x
>>  >  }
>>  linux-40cm:~ # x
>>  declare -- x="y"
>>  y
>>  -bash: declare: x: not found
>>
>> but with bash-5.X the reporter sees (and complains)
>>
>>  sl15sp5:~ # x () {
>>  >   local x=y
>>  >   declare -p x
>>  >   echo $x
>>  >   unset x
>>  >   declare -p x
>>  >   echo $x
>>  >  }
>>  sl15sp5:~ # x
>>  declare -- x="y"
>>  y
>>  declare -- x
>>
>
> I'd say the v5 is more valid, as the error got replaced by working code (
> var=foo ; sav=$( declare -p var ) ; .. ; eval "$var" ( or maybe another
> no-eval version )
> 
>

sorry , eval "$sav" , not $var

greets
>
> ... for global variables it works as expected.
>>
>> Werner
>>
>> --
>>   "Having a smoking section in a restaurant is like having
>>   a peeing section in a swimming pool." -- Edward Burr
>>
>


Re: Local variable can not unset within shell functions

2024-07-11 Thread alex xmb sw ratchev
On Thu, Jul 11, 2024, 15:42 Dr. Werner Fink  wrote:

> Hi,
>
> I've a report that with later bash the following which works in bash-4.2
>
>  x () {
>   local x=y
>   declare -p x
>   echo $x
>   unset x
>   declare -p x
>   echo $x
>  }
>
> with
>
>  linux-40cm:~ # x () {
>  >   local x=y
>  >   declare -p x
>  >   echo $x
>  >   unset x
>  >   declare -p x
>  >   echo $x
>  >  }
>  linux-40cm:~ # x
>  declare -- x="y"
>  y
>  -bash: declare: x: not found
>
> but with bash-5.X the reporter sees (and complains)
>
>  sl15sp5:~ # x () {
>  >   local x=y
>  >   declare -p x
>  >   echo $x
>  >   unset x
>  >   declare -p x
>  >   echo $x
>  >  }
>  sl15sp5:~ # x
>  declare -- x="y"
>  y
>  declare -- x
>

I'd say the v5 is more valid, as the error got replaced by working code ( var=foo
; sav=$( declare -p var ) ; .. ; eval "$var" ( or maybe another no-eval
version )

greets

... for global variables it works as expected.
>
> Werner
>
> --
>   "Having a smoking section in a restaurant is like having
>   a peeing section in a swimming pool." -- Edward Burr
>


Re: proposed BASH_SOURCE_PATH

2024-07-11 Thread konsolebox
On Thu, Jul 11, 2024 at 4:08 AM Chet Ramey  wrote:
> and the BASH_SOURCE
> absolute pathname discussion has been bananas, so that's not going in any
> time soon.

Maybe just create BASH_SOURCE_REAL instead to avoid the gripes.

https://gist.github.com/konsolebox/d9fb2fadd2b8b13d96d0aa7ebea836d9#file-bash-source-real-array-var-patch

This however introduces heavier changes.

I can already see people saying maybe keep this optional.  I don't
like it because you have to store the context directories to be able
to consistently generate the values the moment the option is enabled.

The lazy method I mentioned earlier as well will probably introduce
more code than this for the same reason.  It was only nice in theory.


-- 
konsolebox



Re: proposed BASH_SOURCE_PATH

2024-07-10 Thread Chet Ramey

On 7/7/24 3:34 PM, Greg Wooledge wrote:


At this point, I'm just going to wait and see what gets implemented, and
then figure out how that affects scripts and interactive shells in the
future.


I added -p to ./source, but that's it. It's in the devel branch. There's no
reason to continue discussing BASH_SOURCE_PATH, and the BASH_SOURCE
absolute pathname discussion has been bananas, so that's not going in any
time soon.


--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/




Re: [@]@A weird behaviour when IFS does not contain space

2024-07-10 Thread Emanuele Torre
On Wed, Jul 10, 2024 at 09:24:03AM -0400, Chet Ramey wrote:
> On 7/4/24 2:51 AM, Emanuele Torre wrote:
> > Hello.
> > 
> > Normally, ${foo[@]@A} expands to multiple values, that are the arguments
> > to run a declare command that sets foo to the current value/attributes.
> > 
> >  bash-5.2$ a=( abc xyz 123 ); declare -pa result=("${a[@]@A}")
> >  declare -a result=([0]="declare" [1]="-a" [2]="a=([0]=\"abc\" 
> > [1]=\"xyz\" [2]=\"123\")")
> >  bash-5.2$ a=( abc xyz 123 ); echoargs "${a[@]@A}"
> >  $1='declare'
> >  $2='-a'
> >  $3='a=([0]="abc" [1]="xyz" [2]="123")'
> > 
> > Today, I have noticed that if IFS is set to a value that does not
> > include space, [@]@A will expand to a single value
> 
> OK. Is that a problem? The man page says "when evaluated," and running the
> result through `eval' -- properly quoted -- produces the expected results.
> If there's an issue with making the expansion eval-safe, let's look at that.
> 
> -- 
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
> 

I was curious about why it expands to multiple values if IFS is unset or
is set to a string that contains at least a ' ' character, but it
expands to a single value if IFS is set to any string that does not
contain any ' ' characters.
There is no obvious explanation for that, which suggests there is
something weird going on with this particular kind of expansion,
"${a[@]@A}".

o/
 emanuele6
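A minimal sketch of the eval round-trip Chet describes, assuming the default IFS so the joined expansion stays intact (the array name and values here are arbitrary examples):

```shell
#!/usr/bin/env bash
# Build the declare-style representation with the @A transformation,
# discard the array, then eval the representation to recreate it.
a=( abc 'x y' 123 )
repr=${a[*]@A}        # the "declare -a a=(...)" words joined on spaces
unset a
eval "$repr"          # recreates the array with its values and attributes
declare -p a
```

This is the "properly quoted through eval" usage the man page has in mind; whether the multi-word form survives a non-space IFS is the open question in this thread.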



Re: [@]@A weird behaviour when IFS does not contain space

2024-07-10 Thread Chet Ramey

On 7/4/24 2:51 AM, Emanuele Torre wrote:

Hello.

Normally, ${foo[@]@A} expands to multiple values, that are the arguments
to run a declare command that sets foo to the current value/attributes.

 bash-5.2$ a=( abc xyz 123 ); declare -pa result=("${a[@]@A}")
 declare -a result=([0]="declare" [1]="-a" [2]="a=([0]=\"abc\" [1]=\"xyz\" 
[2]=\"123\")")
 bash-5.2$ a=( abc xyz 123 ); echoargs "${a[@]@A}"
 $1='declare'
 $2='-a'
 $3='a=([0]="abc" [1]="xyz" [2]="123")'

Today, I have noticed that if IFS is set to a value that does not
include space, [@]@A will expand to a single value


OK. Is that a problem? The man page says "when evaluated," and running the
result through `eval' -- properly quoted -- produces the expected results.
If there's an issue with making the expansion eval-safe, let's look at that.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/




Re: Env var feature request

2024-07-09 Thread Daniel Colascione
Greg Wooledge  writes:

> On Tue, Jul 09, 2024 at 20:14:27 +, Erik Keever wrote:
>> A --debug-envvars flag which will, when passed to bash, catch every
>> time an environment variable is set and print the file/line that is
>> setting it. To restrict it, "--debug-envvars FOO,BAR" to catch only
>> instances of FOO or BAR being set.
>
> It's not *exactly* what you're asking for, but you can get most of
> this by invoking bash in xtrace mode with PS4 set to a custom value:
>
> PS4='+ $BASH_SOURCE:$FUNCNAME:$LINENO:' bash -ilxc : 2>&1 | grep WHATEVER
>
> That will show you where WHATEVER is being set during an interactive
> shell login, for example.  Omit the "l" flag if you want to debug a
> non-login shell instead.
>
> Note that if bash is being run as UID 0, it will ignore PS4 coming from
> the environment, for security reasons.  So, this only works as a non-root
> user.

That's a cool trick. It should be in the manual.



Re: Env var feature request

2024-07-09 Thread Greg Wooledge
On Tue, Jul 09, 2024 at 20:14:27 +, Erik Keever wrote:
> A --debug-envvars flag which will, when passed to bash, catch every time an 
> environment variable is set and print the file/line that is setting it. To 
> restrict it, "--debug-envvars FOO,BAR" to catch only instances of FOO or BAR 
> being set.

It's not *exactly* what you're asking for, but you can get most of
this by invoking bash in xtrace mode with PS4 set to a custom value:

PS4='+ $BASH_SOURCE:$FUNCNAME:$LINENO:' bash -ilxc : 2>&1 | grep WHATEVER

That will show you where WHATEVER is being set during an interactive
shell login, for example.  Omit the "l" flag if you want to debug a
non-login shell instead.

Note that if bash is being run as UID 0, it will ignore PS4 coming from
the environment, for security reasons.  So, this only works as a non-root
user.
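A self-contained demo of the same trick against a throwaway script rather than the startup files (WHATEVER is a stand-in variable name, as above; for the interactive-login case just swap in `-ilxc :`):

```shell
#!/usr/bin/env bash
# Run a short script under xtrace with a source:function:line PS4 and
# grep the trace for the assignment of interest.
script=$(mktemp)
printf '%s\n' 'WHATEVER=42' 'unrelated=1' > "$script"
trace=$(PS4='+ $BASH_SOURCE:$FUNCNAME:$LINENO:' bash -x "$script" 2>&1 | grep WHATEVER)
printf '%s\n' "$trace"   # e.g. "+ /tmp/tmp.XXXX::1:WHATEVER=42"
rm -f "$script"
```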



Re: waiting for process substitutions

2024-07-09 Thread Zachary Santer
On Tue, Jul 9, 2024 at 6:12 AM Zachary Santer  wrote:
>
> command-1 | tee >( command-2 ) >( command-3 ) >( command-4 )
> wait
>
> The workaround for this not working would of course be named pipes,
> which is somewhat less trivial.

> Bash is already tracking the pids for all child processes not waited
> on, internally. So I imagine it wouldn't be too much work to make that
> information available to the script it's running. Obviously, this is
> moving beyond "make the existing features make more sense," but an
> array of pids of all child processes not waited on would at least
> allow the user to derive pids of what just got forked from a
> comparison of that array before and after a command including multiple
> procsubs. An array variable like what Alex is suggesting, something
> listing all pids resulting from the last pipeline to fork any child
> process in the current shell environment, would be a solution to the
> matter at hand here.
>
> Maybe a single middle-ground array variable, listing the pids of all
> child processes forked (and not waited on) since the last time the
> array variable was referenced, would be more easily implemented. You
> would just have to save the contents of the array variable in a
> variable of your own each time you reference it, if you want to keep
> track of that stuff. Not unreasonable, considering that you already
> have to do that with $!, at least before each time you fork another
> child process.

On the other hand, do funsubs give us the answer here?

shopt -s lastpipe
declare -a pid=()
command-1 | tee >( command-2 ) ${ pid+=( "${!}" ); } >( command-3 ) ${ pid+=( "${!}" ); } >( command-4 ) ${ pid+=( "${!}" ); }
wait -- "${pid[@]}"

That looks obnoxious, and I should probably get Cygwin going and build
bash-5.3-alpha for myself instead of just asking if this would work
and is sane. That could take me 'til the weekend, though.
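For comparison, the established save-$!-each-time pattern referred to above, as a runnable sketch; plain background jobs stand in for process substitutions, since a command with several procsubs only leaves the last one's pid in $!:

```shell
#!/usr/bin/env bash
# Save $! immediately after each fork; a later fork overwrites it.
pids=()
sleep 0.1 & pids+=( "$!" )
sleep 0.1 & pids+=( "$!" )
wait -- "${pids[@]}"
status=$?
printf 'reaped %d children, wait status %d\n' "${#pids[@]}" "$status"
```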

On Tue, Jul 9, 2024 at 6:12 AM Zachary Santer  wrote:
>
> On Fri, Jul 5, 2024 at 3:16 PM Chet Ramey  wrote:
> >
> > On 7/3/24 8:40 PM, Zachary Santer wrote:
> > >
> > > Hypothetically, it could work like this:
> > > {
> > >commands
> > > } {fd[0]}< <( command-1 )  {fd[1]}< <( command-2 ) {fd[2]}< <( command-3 )
> > > But then again, *I can't get the pids for the processes if I do it this 
> > > way*.

declare -a pid=()
{
  commands
} {fd[0]}< <( command-1 ) ${ pid+=( "${!}" ); } {fd[1]}< <( command-2 ) ${ pid+=( "${!}" ); } {fd[2]}< <( command-3 ) ${ pid+=( "${!}" ); }

Do things start breaking?



Re: [PATCH] fix `shopt -u force_fignore' affecting unrelated parts

2024-07-09 Thread Chet Ramey

On 7/7/24 6:55 AM, Koichi Murase wrote:


Bash Version: 5.3
Patch Level: 0
Release Status: alpha

Description:

   The filtering by `shopt -u force_fignore' is also applied to the
   suppression of completions unrelated to FIGNORE, which results in
   strange behaviors in command- and directory-name completions with
   `shopt -u force_fignore'.  This is because `_ignore_completion_names ()'
   (bashline.c) keeps the original completion list with `shopt -u
   force_fignore' regardless of whether the current filtering is that
   by FIGNORE.  The `force_fignore == 0' should take effect only for
   the filtering by FIGNORE.


Thanks for the detailed report and patch.

Chet

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: waiting for process substitutions

2024-07-09 Thread Zachary Santer
On Fri, Jul 5, 2024 at 2:38 PM Chet Ramey  wrote:
>
> On 6/29/24 10:51 PM, Zachary Santer wrote:
>
> so you were then able to wait for each process substitution individually,
> as long as you saved $! after they were created. `wait' without arguments
> would still wait for all process substitutions (procsub_waitall()), but
> the man page continued to guarantee only waiting for the last one.
>
> This was unchanged in bash-5.2. I changed the code to match what the man
> page specified in 10/2022, after
>
> https://lists.gnu.org/archive/html/bug-bash/2022-10/msg00107.html

Is what's being reported there undesirable behavior? It seems like
what one would expect to happen, at least in hindsight, if it's
understood that 'wait' without arguments will in fact wait for all
procsubs. Furthermore, you can work around it pretty trivially:

$ SECONDS=0; ( sleep 2 & wait -- "${!}" ) > >( sleep 5 ); printf '%s\n' "${SECONDS}"
2

On the other hand, allowing 'wait' without arguments to wait on all
process substitutions would allow my original example to work, in the
case that there aren't other child processes expected to outlive this
pipeline.

command-1 | tee >( command-2 ) >( command-3 ) >( command-4 )
wait

The workaround for this not working would of course be named pipes,
which is somewhat less trivial.
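That workaround might look like the following sketch, where tr and wc are placeholders for command-2 and command-3 (and the printf | tee for command-1 | tee); each reader is started one at a time so its pid can be saved:

```shell
#!/usr/bin/env bash
# Replace anonymous >( ... ) readers with FIFOs so every reader's PID
# can be captured from $! and waited on explicitly.
dir=$(mktemp -d)
mkfifo "$dir/p2" "$dir/p3"
pids=()
tr a-z A-Z < "$dir/p2" > "$dir/out2" & pids+=( "$!" )   # stands in for command-2
wc -l      < "$dir/p3" > "$dir/out3" & pids+=( "$!" )   # stands in for command-3
printf 'hello\n' | tee "$dir/p2" > "$dir/p3"            # stands in for command-1 | tee
wait -- "${pids[@]}"
out2=$(cat "$dir/out2") out3=$(cat "$dir/out3")
rm -rf "$dir"
printf '%s\n' "$out2" "$out3"
```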

> and that is what is in bash-5.3-alpha.

> > c. If calling wait -n in the middle of all this, whether listing only
> > un-waited-on child process pids or all child process pids, it lists
> > all argument pids as "no such job" and terminates with code 127. This
> > is probably incorrect behavior.
>
> We've discussed this before. `wait -n' waits for the next process to
> terminate; it doesn't look back at processes that have already terminated
> and been added to the list of saved exit statuses. There is code tagged
> for bash-5.4 that allows `wait -n' to look at these exited processes as
> long as it's given an explicit set of pid arguments.

I read through some of that conversation at the time. Seemed like an
obvious goof. Kind of surprised the fix isn't coming to bash 5.3,
honestly.

And why "no such job" instead of "not a child of this shell"?

> So the upshot is that bash should probably manage process substitutions
> even more like other asynchronous processes in that the pid/status pair
> should be saved on the list of saved exit statuses for `wait' to find it,
> and clear that list when `wait' is called without arguments.

Sounds good to me.

On Fri, Jul 5, 2024 at 3:16 PM Chet Ramey  wrote:
>
> On 7/3/24 8:40 PM, Zachary Santer wrote:
> > On Wed, Jul 3, 2024 at 11:21 AM Chet Ramey  wrote:
> >>
> >> Process substitutions are word expansions, with a scope of a single
> >> command, and are not expected to survive their read/write file descriptors
> >> becoming invalid. You shouldn't need to `wait' for them; they're not
> >> true asynchronous processes.
> >
> > They clearly are. The fact that it results from a word expansion is 
> > irrelevant.
>
> They're similar, but they're not jobs. They run in the background, but you
> can't use the same set of job control primitives to manipulate them.
> Their scope is expected to be the lifetime of the command they're a part
> of, not run in the background until they're wanted.

Would there be a downside to making procsubs jobs? The only thing that
comes to mind would be seeing
[2]-  Done                    <( asynchronous command )
when I can often safely assume that the procsub has ended by the time
the command it's feeding input to has. Does the distinction between
job and non-job asynchronous process have implications when job
control is disabled?

> > Consider my original example:
> > command-1 | tee >( command-2 ) >( command-3 ) >( command-4 )
> >
> > Any nontrivial command is going to take more time to run than it took
> > to be fed its input.
>
> In some cases, yes.
>
> > The idea that no process in a process
> > substitution will outlive its input stream precludes a reading process
> > substitution from being useful.
>
> It depends on whether or not it can cope with its input (in this case)
> file descriptor being invalidated. In some cases, yes, in some cases, no.

When you say "invalidated," are you referring to something beyond the
process in a reading process substitution simply receiving EOF?
Everything should be able to handle that much.

> > And nevermind
> > exec {fd}< <( command )
> > I shouldn't do this?
>
> Sure, of course you can. You've committed to managing the file descriptor
> yourself at this point, like any other file descriptor you open with exec.

But then, if I 'exec {fd}<&-' before consuming all of command's
output, I would expect it to receive SIGPIPE and die, if it hasn't
already completed. And I might want to ensure that this child process
has terminated before the calling script exits.

> > Why should these be different in practice?
> >
> > (1)
> > mkfifo named-pipe
> > child process command < named-pipe &
> > {
> >foreground shell 

Re: waiting for process substitutions

2024-07-08 Thread alex xmb sw ratchev
On Mon, Jul 8, 2024, 22:57 Greg Wooledge  wrote:

> On Mon, Jul 08, 2024 at 22:45:35 +0200, alex xmb sw ratchev wrote:
> > On Mon, Jul 8, 2024, 22:15 Chet Ramey  wrote:
> >
> > > On 7/8/24 4:02 PM, alex xmb sw ratchev wrote:
> > >
> > > > hi , one question about ..
> > > > if a cmd contains more substitions like >( or <( , how to get all $!
> > > > maybe make ${![]} , or is such already .. ?
> > >
> > > You can't. Process substitutions set $!, but you have to have a point
> > > where you can capture that if you want to wait for more than one.
> That's
> > > the whole purpose of this thread.
> > >
> >
> > so no ${![2]} or so ?
> > else i see only half complex start_first stuff
> >
> > anywa .. greets  = ))
>
> Bash has nothing like that, and as far as I know, nobody is planning to
> add it.
>
> If you need to capture all the PIDs of all your background processes,
> you'll have to launch them one at a time.  This may mean using FIFOs
> (named pipes) instead of anonymous process substitutions, in some cases.
>

and there is no process subs in ' jobs ' ? and no adding such either ?

:/

greets  : ))

>


Re: waiting for process substitutions

2024-07-08 Thread Greg Wooledge
On Mon, Jul 08, 2024 at 22:45:35 +0200, alex xmb sw ratchev wrote:
> On Mon, Jul 8, 2024, 22:15 Chet Ramey  wrote:
> 
> > On 7/8/24 4:02 PM, alex xmb sw ratchev wrote:
> >
> > > hi , one question about ..
> > > if a cmd contains more substitutions like >( or <( , how to get all $!
> > > maybe make ${![]} , or is such already .. ?
> >
> > You can't. Process substitutions set $!, but you have to have a point
> > where you can capture that if you want to wait for more than one. That's
> > the whole purpose of this thread.
> >
> 
> so no ${![2]} or so ?
> else i see only half complex start_first stuff
> 
> anywa .. greets  = ))

Bash has nothing like that, and as far as I know, nobody is planning to
add it.

If you need to capture all the PIDs of all your background processes,
you'll have to launch them one at a time.  This may mean using FIFOs
(named pipes) instead of anonymous process substitutions, in some cases.



Re: waiting for process substitutions

2024-07-08 Thread alex xmb sw ratchev
On Mon, Jul 8, 2024, 22:15 Chet Ramey  wrote:

> On 7/8/24 4:02 PM, alex xmb sw ratchev wrote:
>
> > hi , one question about ..
> > if a cmd contains more substitutions like >( or <( , how to get all $!
> > maybe make ${![]} , or is such already .. ?
>
> You can't. Process substitutions set $!, but you have to have a point
> where you can capture that if you want to wait for more than one. That's
> the whole purpose of this thread.
>

so no ${![2]} or so ?
else i see only half complex start_first stuff

anywa .. greets  = ))

-- 
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
>
>


Re: waiting for process substitutions

2024-07-08 Thread Chet Ramey

On 7/8/24 4:02 PM, alex xmb sw ratchev wrote:


hi , one question about ..
if a cmd contains more substitutions like >( or <( , how to get all $!
maybe make ${![]} , or is such already .. ?


You can't. Process substitutions set $!, but you have to have a point
where you can capture that if you want to wait for more than one. That's
the whole purpose of this thread.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: waiting for process substitutions

2024-07-08 Thread alex xmb sw ratchev
On Mon, Jul 8, 2024, 21:55 Chet Ramey  wrote:

> On 7/8/24 3:27 PM, Dale R. Worley wrote:
> > Greg Wooledge  writes:
> >> Some scripts use something like this:
> >> ...
> >>  exec > >(command ...) ...
> >
> > I've used that construction quite a few times myself.  I'm not
> > requesting that the resulting process be waitable, but certainly
> > whatever the maintainers design should take into account that this is an
> > important construction for practical purposes.
>
> This isn't going to change; you could already wait for that pid, since
> it sets $!.
>

hi , one question about ..
if a cmd contains more substitutions like >( or <( , how to get all $!
maybe make ${![]} , or is such already .. ?

greets

-- 
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
>
>


Re: waiting for process substitutions

2024-07-08 Thread Chet Ramey

On 7/8/24 3:27 PM, Dale R. Worley wrote:

Greg Wooledge  writes:

Some scripts use something like this:
...
 exec > >(command ...) ...


I've used that construction quite a few times myself.  I'm not
requesting that the resulting process be waitable, but certainly
whatever the maintainers design should take into account that this is an
important construction for practical purposes.


This isn't going to change; you could already wait for that pid, since
it sets $!.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: waiting for process substitutions

2024-07-08 Thread Dale R. Worley
Greg Wooledge  writes:
> Some scripts use something like this:
> ...
> exec > >(command ...) ...

I've used that construction quite a few times myself.  I'm not
requesting that the resulting process be waitable, but certainly
whatever the maintainers design should take into account that this is an
important construction for practical purposes.

Dale



Re: Comments on bash 5.2's undocumented <((

2024-07-08 Thread Chet Ramey

On 7/5/24 4:38 PM, Emanuele Torre wrote:


More funny things have been discovered since.

It has been brought up when discussing this in the #bash IRC channel of
irc.libera.chat, that if you run   eval '

This is a consequence of using the same code for a number of things: the
same function handles command substitution, process substitution, running
traps, eval, mapfile, source, fc, and other things (basically any time you
take a string and evaluate it); there isn't enough detection of the
specific case of command substitution we're talking about here.



So you can for example use this to concatenate two files in pure bash;
like $(cat 

Yeah, this is clearly not the intent of the feature, but more an accident
of the implementation and the different uses of this function leaking
into each other.


I have also noticed that if you source a file from $() and the last
logical line of that file contains only a simple command with only one
read-only stdin redirection, bash will also print out that file, again,
in all versions of bash with $(<):


I'm not sure how this can be considered anything but a bug, no matter how
long it's existed.

The best thing to do is probably to remove this (legacy) code from that
common function; with the re-implementation of command substitution parsing
in bash-5.2, the documented usage doesn't go through that code path any
more.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: Comments on bash 5.2's undocumented <((

2024-07-08 Thread Dale R. Worley
Emanuele Torre  writes:
> Yes, clearly that is influencing this new behaviour, but this is new:
> <((

Re: Comments on bash 5.2's undocumented <((

2024-07-08 Thread Chet Ramey

On 7/5/24 2:38 PM, Emanuele Torre wrote:

Bash 5.2 apparently added  <(< file) , which expands to the path to a fifo
(openable only for read on BSD) to which the contents of file are
written, without documenting it.


It's a side effect of making the internal implementations of command and
process substitution cleaner and more consistent. Don't expect it to
become documented or necessarily even stick around.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: proposed BASH_SOURCE_PATH

2024-07-08 Thread Oğuz
On Mon, Jul 8, 2024 at 11:16 AM Martin D Kealey  wrote:
> The only things that the shell has going for it is that it's widely deployed 
> and stable over the long term.
> Otherwise it's a terrible language, and any sane programmer should avoid it 
> entirely:
> This has already been happening, and Bash is >this< close to become an 
> irrelevant historical footnote.
> If you modify Bash in ways that are not backwards compatible, you're then 
> writing in a new language that no new project is likely to adopt.

These are just your opinions.

> That's what "worth breaking existing code" costs in reality: other people's 
> stuff breaks when they've had zero advance notice, because they aren't the 
> people deciding to upgrade Bash.

This I agree with, but personally I don't think the change we discuss
here is that big.

> PPS: In my opinion, the only hope for Bash to continue to exist in the long 
> term is for it to either:
> (a) absolutely guarantee stability, forsaking all new features; or
> (b) adopt a full suite of features that make it into a "normal" programming 
> language, including: support for modules written for different versions of 
> Bash to safely cohabitate in a single script; lexical scoping with 
> namespaces; being able to store references in variables, including some kinds 
> of handles for filedescriptors, functions, processes, and process groups; 
> some mechanism to perform rewriting during parsing (going well beyond what 
> aliases can do) so that new features can be proposed and implemented in shell 
> before being implemented in the C core. And all of that while not breaking 
> code that doesn't ask for these new features.

You're wasting your breath.



Re: proposed BASH_SOURCE_PATH

2024-07-08 Thread Phi Debian
@Greg, @Martin +1 -- lost sight of the feature, and crossing fingers that the
current semantics/behavior are not destroyed. @Oğuz -1 -- I'd like to avoid
fixing scripts that run today just because bash was updated, or I would
advocate distros keep a frozen bash as macOS did.


Re: proposed BASH_SOURCE_PATH

2024-07-08 Thread alex xmb sw ratchev
On Mon, Jul 8, 2024, 10:16 Martin D Kealey  wrote:

>
>
> On Mon, 8 Jul 2024 at 14:42, Oğuz  wrote:
>
>> On Monday, July 8, 2024, Martin D Kealey  wrote:
>>>
>>> It's not possible to change "${BASH_SOURCE[@]}" without breaking some
>>> existing code,
>>>
>>
>> It's worth breaking existing code in this case.
>>
>
> The only things that the shell has going for it is that it's widely
> deployed and stable over the long term.
>
> Otherwise it's a terrible language, and any sane programmer should avoid
> it entirely:
>
>- its syntax resembles no other language, with fun quirks such as
>intentionally mismatched brackets;
>- its lexical tokenization depend on at least 5 different quoting
>styles;
>- text may or may not be evaluated as a numeric expression, based on
>flags set elsewhere with dynamic duration;
>- text may or may not be split into "words" based on delimiters set
>elsewhere with dynamic duration;
>- text may or may not be globbed into matching filenames, yet again
>depending on a dynamic switch;
>- lifetimes for different kinds of entities are controlled by 3
>different overlapping scoping rules;
>- processes are classified and grouped in arcane ways, leading to the
>current complaints about the lifetime of output command substitutions.
>
> If you take away stability then existing code breaks. When that happens
> enough times, people get fed up and either rewrite the code in another
> language, or completely replace it with a different project. When that
> happens enough, there's no point including Bash in the base set for a
> distro, so it's no longer universally available.
>
> This has already been happening, and Bash is >this< close to become an
> irrelevant historical footnote.
>
> If you modify Bash in ways that are not backwards compatible, you're then
> writing in a new language that no new project is likely to adopt.
>
> which leaves us with some kind of explicit opt-in such as:
>>>
>>
>> `shopt -s compat52' should suffice to opt out of the new default. No
>> point in making it more complicated than that.
>>
>
> That is how we got into the current mess: by assuming that "someone" will
> go around and adjust all the already-deployed scripts, by adding a
> "compatNN" option that did not exist when the script was written.
>
> For example, I have a Ubiquiti ER-X router, as do several of my friends
> and family.
> This device has Bash supplied by the vendor. If the vendor ever pushes a
> future version of Bash with breaking updates, even though they will have
> fixed *their* scripts, my internet connection will die before I find out
> that I need to patch the scripts I've installed in it. And then I have to
> go track down the other people who've installed copies of my scripts, and
> get them to update them (which will be difficult if it has broken their
> internet).
>
> That's what "worth breaking existing code" costs in reality: other
> people's stuff breaks when they've had zero advance notice, because they
> aren't the people deciding to upgrade Bash.
>
> -Martin
>
> PS: this situation would be somewhat ameliorated if it were possible to
> use shopt -s compat$CURRENT_BASH_VERSION, so that it won't need modifying
> to be compatible with a future release of Bash. Having to wait until the
> next version of Bash is released before it can be patched to say what
> version it needs is cruel.
>

good idea i had too , compat current bash ver , to be

PPS: In my opinion, the only hope for Bash to continue to exist in the long
> term is for it to either:
> (a) absolutely guarantee stability, forsaking *all* new features; or
> (b) adopt a full suite of features that make it into a "normal"
> programming language, including: support for modules written for different
> versions of Bash to safely cohabitate in a single script; lexical scoping
> with namespaces; being able to store references in variables, including
> some kinds of handles for filedescriptors, functions, processes, and
> process groups; some mechanism to perform rewriting during parsing (going
> well beyond what aliases can do) so that new features can be proposed and
> implemented in shell before being implemented in the C core. And all of
> that while not breaking code that doesn't ask for these new features.
>


Re: proposed BASH_SOURCE_PATH

2024-07-08 Thread Martin D Kealey
On Mon, 8 Jul 2024 at 14:42, Oğuz  wrote:

> On Monday, July 8, 2024, Martin D Kealey  wrote:
>>
>> It's not possible to change "${BASH_SOURCE[@]}" without breaking some
>> existing code,
>>
>
> It's worth breaking existing code in this case.
>

The only things that the shell has going for it is that it's widely
deployed and stable over the long term.

Otherwise it's a terrible language, and any sane programmer should avoid it
entirely:

   - its syntax resembles no other language, with fun quirks such as
   intentionally mismatched brackets;
   - its lexical tokenization depend on at least 5 different quoting styles;
   - text may or may not be evaluated as a numeric expression, based on
   flags set elsewhere with dynamic duration;
   - text may or may not be split into "words" based on delimiters set
   elsewhere with dynamic duration;
   - text may or may not be globbed into matching filenames, yet again
   depending on a dynamic switch;
   - lifetimes for different kinds of entities are controlled by 3
   different overlapping scoping rules;
   - processes are classified and grouped in arcane ways, leading to the
   current complaints about the lifetime of output command substitutions.

If you take away stability then existing code breaks. When that happens
enough times, people get fed up and either rewrite the code in another
language, or completely replace it with a different project. When that
happens enough, there's no point including Bash in the base set for a
distro, so it's no longer universally available.

This has already been happening, and Bash is >this< close to become an
irrelevant historical footnote.

If you modify Bash in ways that are not backwards compatible, you're then
writing in a new language that no new project is likely to adopt.

which leaves us with some kind of explicit opt-in such as:
>>
>
> `shopt -s compat52' should suffice to opt out of the new default. No point
> in making it more complicated than that.
>

That is how we got into the current mess: by assuming that "someone" will
go around and adjust all the already-deployed scripts, by adding a
"compatNN" option that did not exist when the script was written.

For example, I have a Ubiquiti ER-X router, as do several of my friends and
family.
This device has Bash supplied by the vendor. If the vendor ever pushes a
future version of Bash with breaking updates, even though they will have
fixed *their* scripts, my internet connection will die before I find out
that I need to patch the scripts I've installed in it. And then I have to
go track down the other people who've installed copies of my scripts, and
get them to update them (which will be difficult if it has broken their
internet).

That's what "worth breaking existing code" costs in reality: other people's
stuff breaks when they've had zero advance notice, because they aren't the
people deciding to upgrade Bash.

-Martin

PS: this situation would be somewhat ameliorated if it were possible to use
shopt -s compat$CURRENT_BASH_VERSION, so that it won't need modifying to be
compatible with a future release of Bash. Having to wait until the next
version of Bash is released before it can be patched to say what version it
needs is cruel.

PPS: In my opinion, the only hope for Bash to continue to exist in the long
term is for it to either:
(a) absolutely guarantee stability, forsaking *all* new features; or
(b) adopt a full suite of features that make it into a "normal" programming
language, including: support for modules written for different versions of
Bash to safely cohabitate in a single script; lexical scoping with
namespaces; being able to store references in variables, including some
kinds of handles for filedescriptors, functions, processes, and process
groups; some mechanism to perform rewriting during parsing (going well
beyond what aliases can do) so that new features can be proposed and
implemented in shell before being implemented in the C core. And all of
that while not breaking code that doesn't ask for these new features.


Re: proposed BASH_SOURCE_PATH

2024-07-07 Thread Oğuz
On Monday, July 8, 2024, Martin D Kealey  wrote:
>
> It's not possible to change "${BASH_SOURCE[@]}" without breaking some
> existing code,
>

It's worth breaking existing code in this case.

which leaves us with some kind of explicit opt-in such as:
>

`shopt -s compat52' should suffice to opt out of the new default. No point
in making it more complicated than that.


-- 
Oğuz


Re: proposed BASH_SOURCE_PATH

2024-07-07 Thread Martin D Kealey
On Mon, 8 Jul 2024, 05:23 alex xmb sw ratchev,  wrote:

> i dont get the BASH_SOURCE[n] one
> the point of prefix $PWD/ infront of relative paths is a static part of
> fitting into the first lines of the script , assigning vars
>

That's not the only use case.

Consider where you have a script that uses two independently written
libraries, each comprising a main and number of ancillary files. Each
library is installed in its own directory, but that directory isn't encoded
into the library.

The standard advice would be to add both directories to PATH (or some
stand-in such as BASH_SOURCE_PATH), however remember, these are
independently written libraries, and the same filename could be used for
files in both libraries, or the "main" script.

By far the most straightforward way to avoid this problem is to source
files using paths relative to (the directory containing) the file
containing the "." or "source" statement itself. But there is no fully
general, portable, and reliable way to do this, since:
* "${BASH_SOURCE[0]}" might be a relative path based on somewhere in PATH
rather than $PWD, or relative to a different $PWD that's been outdated by cd
;
* "${BASH_SOURCE[0]}" might be a symbolic link into a different directory;
* The directory containing any given file might be unreachable from the
root directory (because of filesystem permissions, process restrictions
(SELinux contexts and equivalents on other OSes), version shadowing, mount
shadowing, soft unmounting, mount namespaces, and probably numerous other
reasons I haven't thought of).

While some of these are intractable, Bash itself at least has a better
chance of getting it right than having to embed screeds of boilerplate code
in every "portable" script. (The more portable/reliable the boilerplate
solution is, the larger and more complex it is, and if it involves
realpath, the slower it gets.)
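The boilerplate being discussed usually looks something like the sketch below (the temp-dir setup and the name `sibling.bash` are illustrative, not from this thread), and it still inherits the symlink and $PWD caveats listed above unless realpath is paid for:

```shell
# Build a throwaway script plus a sibling library to source.
tmp=$(mktemp -d)
printf 'lib_loaded=yes\n' > "$tmp/sibling.bash"

cat > "$tmp/main.sh" <<'EOF'
# Typical boilerplate: resolve the directory containing this file, then
# source relative to it. realpath resolves symlinks but costs a fork.
script_dir=$(dirname -- "$(realpath -- "${BASH_SOURCE[0]}")")
source "$script_dir/sibling.bash"
echo "lib_loaded=$lib_loaded"
EOF

bash "$tmp/main.sh"    # prints: lib_loaded=yes
rm -rf "$tmp"
```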

It's not possible to change "${BASH_SOURCE[@]}" without breaking some
existing code, which leaves us with some kind of explicit opt-in such as:
# 1. mark the source command itself
source -r file_in_same_dir.bash

# 2. Change the default behaviour via shopt/-O
#!/bin/bash -Orelsource
source file_in_same_dir.bash

# 3. set all the forward compat options by controlling argv[0]
#!/bin/bash7
source file_in_same_dir.bash

Or else we could use a new variable such as "${BASH_SOURCE_DIR[@]}" to hold
the normalized directories (computing those is slightly less work than
normalizing the whole path and then discarding the last component).

Whatever solution is chosen, I would like it to be easier for a script
author to do the right thing than to do the wrong thing. And all the better
if it could quietly fix the myriad scripts out there that assume [[ ${0%/*}
-ef . ]].

-Martin


Re: anonymous pipes in recursive function calls

2024-07-07 Thread Zachary Santer
On Sun, Jul 7, 2024 at 2:44 PM Chet Ramey  wrote:
>
> On 7/1/24 8:08 PM, Zachary Santer wrote:
> >
> > Would still like to know roughly when these issues were resolved.
>
> Why not check the releases -- with patches -- between the two? They're
> all available via git if you don't want to download tarballs.

Completely fair. I am being a bit lazy.

Really need to bite the bullet and switch to Cygwin.



Re: proposed BASH_SOURCE_PATH

2024-07-07 Thread Greg Wooledge
On Sun, Jul 07, 2024 at 21:23:15 +0200, alex xmb sw ratchev wrote:
> hi ..
> i dont get the BASH_SOURCE[n] one
> the point of prefix $PWD/ infront of relative paths is a static part of
> fitting into the first lines of the script , assigning vars
> .. if u cd first then want the old relative path .. no go .. it must be
> done at early codes

By now, we've had many conflicting ideas proposed by different people,
each trying to solve a different problem.  I've long since lost track of
what all of the proposals and concepts were.

At this point, I'm just going to wait and see what gets implemented, and
then figure out how that affects scripts and interactive shells in the
future.



Re: proposed BASH_SOURCE_PATH

2024-07-07 Thread alex xmb sw ratchev
On Sun, Jul 7, 2024, 21:03 Chet Ramey  wrote:

> On 7/3/24 5:32 PM, alex xmb sw ratchev wrote:
> > is this all about adding full path ? to source / . ?
> > for this may add one varname , like BASH_SOURCE_FULL
> >
> > it seems to me , using BASH_SOURCE , if it doesnt start with / , prefix
> > $PWD , .. else its already
>
> The objection is that people don't want to have to do that -- what if you
> change directories after a relative path is added to BASH_SOURCE[n]?
>

hi ..
i dont get the BASH_SOURCE[n] one
the point of prefix $PWD/ infront of relative paths is a static part of
fitting into the first lines of the script , assigning vars
.. if u cd first then want the old relative path .. no go .. it must be
done at early codes

one argument is , preserving one to one strings as user passes including
relative paths like spawn script name

.. while also the need for absolute path is not less important
i see another var added . BASH_SOURCE_ABSOLUTE , or _REAL , or so , which
bash will fill the path
.. a question is still what to do with symlinks , maybe additional code is
needed

thanks .. greets ..

-- 
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
>
>


Re: proposed BASH_SOURCE_PATH

2024-07-07 Thread Chet Ramey

On 7/3/24 5:32 PM, alex xmb sw ratchev wrote:

is this all about adding full path ? to source / . ?
for this may add one varname , like BASH_SOURCE_FULL

it seems to me , using BASH_SOURCE , if it doesnt start with / , prefix 
$PWD , .. else its already


The objection is that people don't want to have to do that -- what if you
change directories after a relative path is added to BASH_SOURCE[n]?

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



OpenPGP_signature.asc
Description: OpenPGP digital signature


Re: anonymous pipes in recursive function calls

2024-07-07 Thread Chet Ramey

On 7/1/24 8:08 PM, Zachary Santer wrote:


My repeat-by does elicit both behaviors in bash 4.2. The nested
anonymous pipe (line 17) was necessary to get the diagnostic message.
All this seems to be fixed by the time we get to bash 5.2. I've
attached the repeat-by script, in case it's useful. Would still like
to know roughly when these issues were resolved.


Why not check the releases -- with patches -- between the two? They're
all available via git if you don't want to download tarballs.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: 5.2.15(1)-release: RETURN trap not executed in cmd subst if last statement of fct is not builtin

2024-07-06 Thread alex xmb sw ratchev
maybe its scope ..
1st define trap 2 define and run func ..

On Sat, Jul 6, 2024, 15:02 Emanuele Torre  wrote:

> On Sat, Jul 06, 2024 at 01:41:01PM +0200, Jens Schmidt wrote:
> > Today I came across this one:
> >
> >   [copy-on-select-2]$ echo $BASH_VERSION
> >   5.2.15(1)-release
> >   [copy-on-select-2]$ foo() { trap 'echo foo' RETURN; /bin/echo bar; }
> >   [copy-on-select-2]$ foo
> >   bar
> >   foo
> >   [copy-on-select-2]$ baz=$( foo )
> >   [copy-on-select-2]$ echo "<$baz>"
> >   <bar>
> >
> > I guess that Bash execve's the "/bin/echo" since it runs in a command
> > substitution and the trap does not get executed, hence.
> >
> > As a work-around, one can tweak function foo so that the last command
> > is not an external one, like this:
> >
> >   [copy-on-select-2]$ foo() { trap 'echo foo' RETURN; /bin/echo bar;
> return $?; }
> >   [copy-on-select-2]$ foo
> >   bar
> >   foo
> >   [copy-on-select-2]$ baz=$( foo )
> >   [copy-on-select-2]$ echo "<$baz>"
> >   <bar
> >   foo>
> >
> > Would this be a bug?
> >
> > Should I report it separately on bug-bash?
> >
> > Thanks!
> >
>
> This is also reproducible on 5.2.26 and the devel branch:
>
> bash-5.3$ foo() { trap 'echo foo' RETURN; /bin/echo bar; }
> bash-5.3$ baz=$( foo )
> bash-5.3$ declare -p baz
> declare -- baz="bar"
>
> bash normally disables the exec optimisation if there are EXIT traps,
> but apparently it does not do it if there are RETURN traps in a
> function.
>
> CC+=bug-bash@gnu.org since this is a bug report
>
> o/
>  emanuele6
>
>


Re: 5.2.15(1)-release: RETURN trap not executed in cmd subst if last statement of fct is not builtin

2024-07-06 Thread Emanuele Torre
On Sat, Jul 06, 2024 at 01:41:01PM +0200, Jens Schmidt wrote:
> Today I came across this one:
> 
>   [copy-on-select-2]$ echo $BASH_VERSION
>   5.2.15(1)-release
>   [copy-on-select-2]$ foo() { trap 'echo foo' RETURN; /bin/echo bar; }
>   [copy-on-select-2]$ foo
>   bar
>   foo
>   [copy-on-select-2]$ baz=$( foo )
>   [copy-on-select-2]$ echo "<$baz>"
>   <bar>
> 
> I guess that Bash execve's the "/bin/echo" since it runs in a command
> substitution and the trap does not get executed, hence.
> 
> As a work-around, one can tweak function foo so that the last command
> is not an external one, like this:
> 
>   [copy-on-select-2]$ foo() { trap 'echo foo' RETURN; /bin/echo bar; return 
> $?; }
>   [copy-on-select-2]$ foo
>   bar
>   foo
>   [copy-on-select-2]$ baz=$( foo )
>   [copy-on-select-2]$ echo "<$baz>"
>   <bar
>   foo>
> 
> Would this be a bug?
> 
> Should I report it separately on bug-bash?
> 
> Thanks!
> 

This is also reproducible on 5.2.26 and the devel branch:

bash-5.3$ foo() { trap 'echo foo' RETURN; /bin/echo bar; }
bash-5.3$ baz=$( foo )
bash-5.3$ declare -p baz
declare -- baz="bar"

bash normally disables the exec optimisation if there are EXIT traps,
but apparently it does not do it if there are RETURN traps in a
function.
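A minimal sketch of both cases side by side (the function names are illustrative; on versions with the bug, the first command substitution loses the trap's output):

```shell
# Affected case: the last command is external, so bash execve()s it
# inside $( ) and the RETURN trap never runs.
foo() { trap 'echo foo' RETURN; /bin/echo bar; }
baz=$(foo)
printf 'no workaround:   <%s>\n' "$baz"    # <bar> on affected versions

# Workaround: "return $?" makes the external command non-final, which
# disables the exec optimisation, so the trap output is captured too.
qux() { trap 'echo foo' RETURN; /bin/echo bar; return $?; }
baz=$(qux)
printf 'with workaround: <%s>\n' "$baz"    # bar and foo, newline-separated
```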

CC+=bug-bash@gnu.org since this is a bug report

o/
 emanuele6



Re: Comments on bash 5.2's undocumented <((

2024-07-05 Thread Emanuele Torre
On Fri, Jul 05, 2024 at 10:38:59PM +0200, Emanuele Torre wrote:
> Yes, clearly that is influencing this new behaviour, but this is new:
> <(((

Re: Comments on bash 5.2's undocumented <((

2024-07-05 Thread Emanuele Torre
On Fri, Jul 05, 2024 at 04:10:55PM -0400, Dale R. Worley wrote:
> Emanuele Torre  writes:
> > Bash 5.2 apparently added  <(< file)  that expand to the path to a fifo
> > (openable only for read on BSD) to which the contents of file are
> > written to, without documenting it.
> 
> I suspect that this is a consequence of
> 
>The command substitution $(cat file) can be replaced by the
> equivalent but faster $(< file).
> 

Yes, clearly that is influencing this new behaviour, but this is new:
<((

> (Which oddly enough, I suggested for Bash some decades ago.)  In the
> latter form, Bash doesn't set up a subprocess and then read the pipe
> from it but instead just reads the named file.  Or rather, that was the
> initial implementation.  I suspect that the code has been updated so
> that an "inner command" of "< file" now copies "file" to stdout (as if
> it was cat), and the various results you see are based on what the
> parent process does with that output.

More funny things have been discovered since.

It has been brought up when discussing this in the #bash IRC channel of
irc.libera.chat, that if you run   eval ' f
bash-5.2$ echo ih > g
bash-5.2$ x=$(eval ' file
bash-5.2$ { _=$(eval '< file' >&3) ;} 3<&1
hi
hellobash-5.2$ 

I have also noticed that if you source a file from $() and the last
logical line of that file contains only a simple command with only one
read-only stdin redirection, bash will also print out that file, again,
in all versions of bash with $(<):

bash-5.2$ printf hello > myfile
bash-5.2$ cat <<'EOF' > tosource
echo foo
< myfile
EOF
bash-5.2$ x=$(. ./tosource; echo hey)
bash-5.2$ declare -p x
declare -- x=$'foo\nhellohey'

> 
> Dale

o/
 emanuele6



Re: Comments on bash 5.2's undocumented <((

2024-07-05 Thread Dale R. Worley
Emanuele Torre  writes:
> Bash 5.2 apparently added  <(< file)  that expand to the path to a fifo
> (openable only for read on BSD) to which the contents of file are
> written to, without documenting it.

I suspect that this is a consequence of

   The command substitution $(cat file) can be replaced by the
   equivalent but faster $(< file).

(Which oddly enough, I suggested for Bash some decades ago.)  In the
latter form, Bash doesn't set up a subprocess and then read the pipe
from it but instead just reads the named file.  Or rather, that was the
initial implementation.  I suspect that the code has been updated so
that an "inner command" of "< file" now copies "file" to stdout (as if
it was cat), and the various results you see are based on what the
parent process does with that output.
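The documented form and its process-substitution analogue can be compared directly; a sketch, with the second command relying on the behaviour this thread reports as new (and undocumented) in bash 5.2:

```shell
f=$(mktemp)
printf 'hello\n' > "$f"

# Documented: a bare redirection inside $() reads the file directly,
# without forking cat.
x=$(< "$f")
echo "$x"              # hello

# New in 5.2 (undocumented): the analogous process substitution expands
# to a /dev/fd path (or fifo) carrying the file's contents.
cat <(< "$f")          # hello on bash >= 5.2; empty output before that

rm -f "$f"
```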

Dale



Re: waiting for process substitutions

2024-07-05 Thread Greg Wooledge
On Fri, Jul 05, 2024 at 15:16:31 -0400, Chet Ramey wrote:
> They're similar, but they're not jobs. They run in the background, but you
> can't use the same set of job control primitives to manipulate them.
> Their scope is expected to be the lifetime of the command they're a part
> of, not run in the background until they're wanted.

Some scripts use something like this:

#!/bin/bash
exec > >(tee /some/logfile) 2>&1
logpid=$!

...

exec >&-
wait "$logpid"

Your expectations might be different from those of bash's users.



Re: waiting for process substitutions

2024-07-05 Thread Chet Ramey

On 7/3/24 8:40 PM, Zachary Santer wrote:

On Wed, Jul 3, 2024 at 11:21 AM Chet Ramey  wrote:


Process substitutions are word expansions, with a scope of a single
command, and are not expected to survive their read/write file descriptors
becoming invalid. You shouldn't need to `wait' for them; they're not
true asynchronous processes.


They clearly are. The fact that it results from a word expansion is irrelevant.


They're similar, but they're not jobs. They run in the background, but you
can't use the same set of job control primitives to manipulate them.
Their scope is expected to be the lifetime of the command they're a part
of, not run in the background until they're wanted.



Consider my original example:
command-1 | tee >( command-2 ) >( command-3 ) >( command-4 )

Any nontrivial command is going to take more time to run than it took
to be fed its input.


In some cases, yes.


The idea that no process in a process
substitution will outlive its input stream precludes a reading process
substitution from being useful.


It depends on whether or not it can cope with its input (in this case)
file descriptor being invalidated. In some cases, yes, in some cases, no.


And nevermind
exec {fd}< <( command )
I shouldn't do this?


Sure, of course you can. You've committed to managing the file descriptor
yourself at this point, like any other file descriptor you open with exec.



To me, a process substitution is just a way to avoid the overhead of
creating named pipes.


They're not (often) named pipes. What they are is a way to expose an
anonymous pipe in the file system. That may be close to what a named
pipe is, but a /dev/fd filename is not a named pipe (and has some
annoyingly system-specific differences).



Why should these be different in practice?

(1)
mkfifo named-pipe
child process command < named-pipe &
{
   foreground shell commands
} > named-pipe

(2)
{
   foreground shell commands
} > >( child process command )


Because you create a job with one and not the other, explicitly allowing
you to manipulate `child' directly?



In my actual use cases, I have:

(1)
A couple different scripts that alternate reading from multiple
different processes, not entirely unlike
sort -- <( command-1 ) <( command-2 ) <( command-3 )
except it's using exec and automatic fds.

Hypothetically, it could work like this:
{
   commands
} {fd[0]}< <( command-1 )  {fd[1]}< <( command-2 ) {fd[2]}< <( command-3 )
But then again, *I can't get the pids for the processes if I do it this way*.


If you have to get the pids for the individual processes, do it a different
way. That's just not part of what process substitutions provide: they are
word expansions that expand to a filename. If the semantics make it more
convenient for you to use named pipes, then use named pipes.
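A named-pipe version of the same plumbing that does expose a waitable pid might look like this sketch (filenames illustrative):

```shell
dir=$(mktemp -d)
mkfifo "$dir/pipe"

# The reader is an ordinary background job: $! gives a pid that can be
# passed to wait, kill, etc. -- unlike a process substitution.
tr a-z A-Z < "$dir/pipe" > "$dir/out" &
reader_pid=$!

echo hello > "$dir/pipe"   # opening and then closing the fifo delivers EOF

wait "$reader_pid"         # a real job, so this wait is reliable
cat "$dir/out"             # HELLO
rm -rf "$dir"
```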

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: waiting for process substitutions

2024-07-05 Thread Chet Ramey

On 7/2/24 9:59 PM, Zachary Santer wrote:


I *am* seeing a difference between having lastpipe enabled (and job
control off) or not when running your example in the interactive
shell, though:
SECONDS=0; echo $'foo\nbar' | tee >(echo first ; exit 1) >(wc ; sleep
10 ; echo wc) >(tail -n 1; echo tail); wait; printf '%s\n'
"SECONDS=${SECONDS}"

With lastpipe disabled, wait exits immediately.


There are no process substitutions or asynchronous processes for `wait'
to return, since the last pipeline element containing the procsubs is
executed in a child process.


With lastpipe enabled,
it does seem to wait for everything.


The last pipeline element is executed in the current environment, and
any side effects persist in the current environment, so you have process
substitutions for `wait' to return.

I didn't look at the script behavior, since you don't have any pipelines
where lastpipe might have an effect.
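The difference lastpipe makes is easy to reproduce in a non-interactive shell, where job control is off as required for it to take effect; a minimal sketch:

```shell
# Without lastpipe, the final "read" runs in a subshell and the
# assignment is lost when that subshell exits.
shopt -u lastpipe
echo hi | read var
printf 'lastpipe off: <%s>\n' "${var-unset}"   # <unset>

# With lastpipe set (and job control off), the last pipeline element
# runs in the current shell, so side effects like this persist.
shopt -s lastpipe
echo hi | read var
printf 'lastpipe on:  <%s>\n' "$var"           # <hi>
```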

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: waiting for process substitutions

2024-07-05 Thread Chet Ramey

On 6/29/24 10:51 PM, Zachary Santer wrote:


They might already. Now I'm wondering if the documentation just needed updating.


It might, but maybe not how you think. See below.



I'm afraid to report this as a bug, because it feels like something
that running bash in MSYS2 on Windows could be responsible for, but
here goes.


No, it's not MSYS-specific.


Bash Version: 5.2
Patch Level: 26
Release Status: release

Description:

So bash can wait on process substitutions.

1) When all child processes are process substitutions:
a. wait without arguments actually appears to wait for all of them,
not just the last-executed one, contradicting the man page.


This is how it is in bash-5.2. It does contradict the man page, and I
reverted it to match the man page in October, 2022, after

https://lists.gnu.org/archive/html/bug-bash/2022-10/msg00107.html

The whole business is kind of a mess. Let's see if we can figure out the
best behavior.

Through bash-4.4, `wait' without arguments didn't wait for process
substitutions at all. I changed bash in 12/2017 to wait for all procsubs
in addition to explicitly waiting for the last process substitution if
its pid was the same as $!, mostly the result of this discussion:

https://lists.gnu.org/archive/html/bug-bash/2017-12/msg2.html

and that change was in bash-5.0. The man page didn't change until 06/2019.

There were some bug reports about this behavior, e.g.,

https://lists.gnu.org/archive/html/bug-bash/2019-06/msg0.html

In bash-5.1 I added code to treat process substitutions more like jobs,
even though they're not, in response to this use case:

https://lists.gnu.org/archive/html/bug-bash/2019-09/msg00021.html

so you were then able to wait for each process substitution individually,
as long as you saved $! after they were created. `wait' without arguments
would still wait for all process substitutions (procsub_waitall()), but
the man page continued to guarantee only waiting for the last one.

This was unchanged in bash-5.2. I changed the code to match what the man
page specified in 10/2022, after

https://lists.gnu.org/archive/html/bug-bash/2022-10/msg00107.html

and that is what is in bash-5.3-alpha.

The other thing that wait without arguments does is clear the list of
remembered asynchronous pids and statuses. But process substitutions
don't count against this (they probably should), so you've waited for
the process substitutions and remembered their statuses. Bash should
probably clear the list of reaped process substitutions before wait
without arguments returns. But it doesn't do that yet.



b. A subsequent call to wait listing all child process pids
immediately terminates successfully.


See above. `wait' finds the pids and exit statuses in the list of
reaped procsubs. This is probably wrong: they should probably be cleared
when `wait' runs without arguments.


c. If calling wait -n in the middle of all this, whether listing only
un-waited-on child process pids or all child process pids, it lists
all argument pids as "no such job" and terminates with code 127. This
is probably incorrect behavior.


We've discussed this before. `wait -n' waits for the next process to
terminate; it doesn't look back at processes that have already terminated
and been added to the list of saved exit statuses. There is code tagged
for bash-5.4 that allows `wait -n' to look at these exited processes as
long as it's given an explicit set of pid arguments.


2) When a standard background process is added:
a. wait without arguments waits for all child processes.


Same.


b. A subsequent call to wait listing all child process pids lists all
argument pids as not children of the shell and terminates with code
127. This seems incorrect, or at least the change in behavior from 1b.
is unexpected.


It's different. The reaped background process triggers the cleanup of
all background processes, including cleaning out the list of reaped
procsubs. This is what doesn't happen, but probably should, in case 1.


c. If calling wait -n in the middle of all this, we see that it only
lists the pids from process substitutions as "no such job".


Same.

So the upshot is that bash should probably manage process substitutions
even more like other asynchronous processes in that the pid/status pair
should be saved on the list of saved exit statuses for `wait' to find it,
and clear that list when `wait' is called without arguments.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: [@]@A weird behaviour when IFS does not contain space

2024-07-05 Thread Emanuele Torre
On Thu, Jul 04, 2024 at 09:08:21PM -0400, Dale R. Worley wrote:
> Emanuele Torre  writes:
> > [...]
> > Today, I have noticed that if IFS is set to a value that does not
> > include space, [@]@A will expand to a single value
> > [...]
> > As an aside, [*]@A always expands to the declare command joined by
> > space, even if the first character of IFS is not space; I think that is
> > a bit confusing, and surprising, but maybe that is done intentionally:
> > "intended and undocumented"(?).
> 
> IMHO, the second observation is what should happen:  The construct
> "${a[*]@A}", like almost all variable expansions, produces a *character
> string*, and then the characters are parsed into words and interpreted.
> In this case, the string contains spaces between the characters that are
> generated for each array member.  But if IFS doesn't contain a space,
> then that string of characters isn't split into multiple words.
> 
> Although perhaps "${a[*]@A}" should operate like "${a[*]}" does
> (according to my very old man page):
> 
>If
>the word is double-quoted, ${name[*]} expands to a single word with the
>value of each array member separated by the first character of the IFS
>special variable, [...]
> 
> That is, the blocks of the result string should be separated by the
> first character of IFS.
> 

I don't really understand what you are trying to say here; anyway what I
meant was just that ${a[*]@A} always expands to 'declare -a foo=(bar)'
even though "${a[@]@A}" expands to 'declare' '-a' 'foo=(bar)'; it does
not expand to  'declare-afoo=(bar)' (for IFS=) or
'declarez-azfoo=(bar)' (for IFS=z). I find that surprising.

> The first case is more complicated.  The man page says for "${a[@]}":
> 
>[...] ${name[@]} expands each element of name to a separate word.
> 
> This is the one case where the results of a variable expansion can't be
> modeled simply as replacing the variable reference with a string of
> characters (which are then parsed).  It suggests that "${a[@]@A}" should
> automagically generate a separate word for each element of the array a,
> regardless of the value of IFS.
> 
> Dale

No, that is not how ${[@]@A} should work since you are supposed to run
it as a command  ( "${foo[@]@A}" ).  Anyway, [@] should not be
influenced by IFS. This is obviously a bug.

Whether the intended behaviour was:
* to make both "${a[*]@A}" and "${a[@]@A}" always expand to 'declare -a
  foo=(bar)' regardless of IFS, and you are actually supposed to use the
  result as  eval -- "${a[@]@A}"
* to make "${a[@]@A}" expand to separate values as it is doing now (so
  that "${a[@]@A}" works as a command)
* to make "${a[*]@A}" always expand as space concatenated results of
  "${a[@]@A}"
* to make "${a[*]@A}" always expand as IFS concatenated results of
  "${a[@]@A}"

I don't know; but definitely that IFS=z makes "${a[@]@A}" expand how
"${a[*]@A}" is currently expanding is not intended.

o/
 emanuele6



Re: waiting for process substitutions

2024-07-04 Thread Dale R. Worley
Reading this discussion, I notice a subtlety.  If you execute:

$ command-A >( command-1) <( command-2 )
$ command-B

when command-B executes, command-2 must have terminated already because
command-A wouldn't have seen the EOF from command-2 until command-2
terminated.  (OK, I am assuming here that command-A did read its second
argument until EOF, and that's not guaranteed.)  But there's no
guarantee that command-1 has terminated; all that command-B can depend
on is that EOF was *sent* to command-1.

However, the documentation talks of $! possibly being the PID of
command-1 etc., but my (old) manual page doesn't describe how $! could
be set to be the PID of command-1, or even how a script could determine
the PID of command-1 in order to set $! to that number.  (Although it
does describe that if $! is the PID of command-1, then "wait without id"
will wait for $!.)

Dale



Re: [@]@A weird behaviour when IFS does not contain space

2024-07-04 Thread Dale R. Worley
Emanuele Torre  writes:
> [...]
> Today, I have noticed that if IFS is set to a value that does not
> include space, [@]@A will expand to a single value
> [...]
> As an aside, [*]@A always expands to the declare command joined by
> space, even if the first character of IFS is not space; I think that is
> a bit confusing, and surprising, but maybe that is done intentionally:
> "intended and undocumented"(?).

IMHO, the second observation is what should happen:  The construct
"${a[*]@A}", like almost all variable expansions, produces a *character
string*, and then the characters are parsed into words and interpreted.
In this case, the string contains spaces between the characters that are
generated for each array member.  But if IFS doesn't contain a space,
then that string of characters isn't split into multiple words.

Although perhaps "${a[*]@A}" should operate like "${a[*]}" does
(according to my very old man page):

   If
   the word is double-quoted, ${name[*]} expands to a single word with the
   value of each array member separated by the first character of the IFS
   special variable, [...]

That is, the blocks of the result string should be separated by the
first character of IFS.
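That rule is easy to check for plain "${a[*]}" (a minimal sketch):

```shell
a=(x y z)

IFS=,
echo "${a[*]}"    # x,y,z -- joined with the first character of IFS

IFS=''
echo "${a[*]}"    # xyz   -- an empty IFS joins with nothing
```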

The first case is more complicated.  The man page says for "${a[@]}":

   [...] ${name[@]} expands each element of name to a separate word.

This is the one case where the results of a variable expansion can't be
modeled simply as replacing the variable reference with a string of
characters (which are then parsed).  It suggests that "${a[@]@A}" should
automagically generate a separate word for each element of the array a,
regardless of the value of IFS.

Dale



Re: printf fails in version 5.2.026-3

2024-07-04 Thread Sam James
Daniel Lublin  writes:

> Apparently the patch needed to fix this was the one Florian Weimer
> posted in november 2023, in "C compatibility issue in the configure
> script" <8734oqnlou@gentoo.org> (if you don't have the mail history:
> https://lists.gnu.org/archive/html/bug-bash/2023-11/msg00104.html)
>
> Maybe some compiler change triggered this now.

Yes, GCC 14 makes various things stricter - for the better - and this
can affect configure scripts.

Arch didn't, as far as I know, do a mass-rebuild when adding GCC 14,
which means these issues now only show up when a package gets updated
for unrelated reasons. I consider this to not have been the ideal
strategy. We did try to advertise what people should do, but you can
only shout so much.



Re: printf fails in version 5.2.026-3

2024-07-04 Thread Daniel Lublin
The message ID with the patch in question was of course
<87leasmvoo@oldenburg.str.redhat.com>

-- 
Daniel
lublin.se



Re: printf fails in version 5.2.026-3

2024-07-04 Thread Daniel Lublin
Apparently the patch needed to fix this was the one Florian Weimer
posted in november 2023, in "C compatibility issue in the configure
script" <8734oqnlou@gentoo.org> (if you don't have the mail history:
https://lists.gnu.org/archive/html/bug-bash/2023-11/msg00104.html)

Maybe some compiler change triggered this now.

-- 
Daniel
lublin.se



Re: waiting for process substitutions

2024-07-03 Thread Zachary Santer
On Wed, Jul 3, 2024 at 11:21 AM Chet Ramey  wrote:
>
> Process substitutions are word expansions, with a scope of a single
> command, and are not expected to survive their read/write file descriptors
> becoming invalid. You shouldn't need to `wait' for them; they're not
> true asynchronous processes.

They clearly are. The fact that it results from a word expansion is irrelevant.

Consider my original example:
command-1 | tee >( command-2 ) >( command-3 ) >( command-4 )

Any nontrivial command is going to take more time to run than it took
to be fed its input. The idea that no process in a process
substitution will outlive its input stream precludes a reading process
substitution from being useful.

And nevermind
exec {fd}< <( command )
I shouldn't do this?

To me, a process substitution is just a way to avoid the overhead of
creating named pipes.

Why should these be different in practice?

(1)
mkfifo named-pipe
child process command < named-pipe &
{
  foreground shell commands
} > named-pipe

(2)
{
  foreground shell commands
} > >( child process command )

In my actual use cases, I have:

(1)
A couple different scripts that alternate reading from multiple
different processes, not entirely unlike
sort -- <( command-1 ) <( command-2 ) <( command-3 )
except it's using exec and automatic fds.

Hypothetically, it could work like this:
{
  commands
} {fd[0]}< <( command-1 )  {fd[1]}< <( command-2 ) {fd[2]}< <( command-3 )
But then again, *I can't get the pids for the processes if I do it this way*.

( 2 )
shopt -s lastpipe
exec {fd}> >( command-2 )
command-1 |
  while [...]; do
[...]
if [[ ${something} == 'true' ]]; then
  printf '%s\x00' "${var}" >&"${fd}"
fi
  done
#
exec {fd}>&-

This whole arrangement is necessary because I need what's going on in
the while loop to be in the parent shell if I'm going to use coproc
fds directly. What's going on in the process substitution will more or
less only begin when the fd it's reading from is closed, because it
involves at least one call to xargs.

But, theoretically, process substitutions shouldn't even allow for
these things? Why not give us a 'pipe' builtin at that point?


