Re: Examples of concurrent coproc usage?

2024-06-11 Thread Zachary Santer
On Mon, Jun 10, 2024 at 1:07 PM Robert Elz  wrote:
>
> The next POSIX will include O_CLOFORK and FD_CLOFORK (or names
> similar to those) for open (etc) and fcntl(FDFLAGS) - that is
> analogs of O_CLOEXEC and FD_CLOEXEC but applying to fork() rather
> than exec*().

Well there you go. Once people are satisfied with the extent to which
O_CLOFORK and FD_CLOFORK are present in operating systems, the coproc
keyword could simply apply that fd flag to the fds it creates. A
builtin 'fdflags' or similar could then turn it off for those fds or
on for any arbitrary fd.
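
Sketching how that might look, assuming the existing 'fdflags' loadable's
flag syntax, a hypothetical 'clofork' flag name, and stand-in command
names:

coproc WORKER { worker_cmd; }         # coproc would set FD_CLOFORK on both fds
fdflags -s -clofork "${WORKER[0]}"    # turn it back off: let this fd survive fork()
fdflags -s +clofork "${some_fd}"      # or turn it on for any arbitrary fd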

I'd be willing to wait for that. It seems like a really elegant solution.

Zack



feature suggestion: ability to expand a set of elements of an array or characters of a scalar, given their indices

2024-06-11 Thread Zachary Santer
Was "bash tries to parse comsub in quoted PE pattern"

On Wed, Oct 18, 2023 at 8:19 AM Zachary Santer  wrote:
>
> In Bash 5.2:
> $ array=( zero one two three four five six )
> $ printf '%s\n' "${array["{2..6..2}"]}"
> two
> four
> six
> $ printf '%s\n' "${array[{2..6..2}]}"
> -bash: {2..6..2}: syntax error: operand expected (error token is "{2..6..2}")
> $ printf '%s\n' "${array["2 4 6"]}"
> -bash: 2 4 6: syntax error in expression (error token is "4 6")
> $ printf '%s\n' "${array[2 4 6]}"
> -bash: 2 4 6: syntax error in expression (error token is "4 6")
> $ printf '%s\n' "${array[2,4,6]}"
> six
> $ indices=(2 4 6)
> $ printf '%s\n' "${array[${indices[@]}]}"
> -bash: 2 4 6: syntax error in expression (error token is "4 6")
> $ printf '%s\n' "${array[${indices[*]}]}"
> -bash: 2 4 6: syntax error in expression (error token is "4 6")

My mind returns to this nonsense, as I find a use for it.

Imagine this functionality:
$ array=( zero one two three four five six )
$ printf '%s\n' "${array[@]( 1 5 )}"
one
five
$ printf '%s\n' "${array[*]( 1 5 )}"
one five
$ indices_array=( 6 2 )
$ printf '%s\n' "${array[@]( "${indices_array[@]}" )}"
six
two
$ indices_scalar='-7 -4'
$ printf '%s\n' "${array[@]( ${indices_scalar} )}"
zero
three
$ scalar='0123456'
$ printf '%s\n' "${scalar( 1 5 )}"
15
$ printf  '%s\n' "${scalar( "${indices_array[@]}" )}"
62
$ printf '%s\n' "${scalar( ${indices_scalar} )}"
03

The (  ) within the parameter expansion would be roughly analogous to
the right hand side of a compound assignment statement for an indexed
array. The values found therein would be taken as the indices of array
elements or characters to expand. Trying to set indices for the
indices, i.e. "${array[@]( [10]=1 [20]=5 )}", wouldn't make any sense,
though, so not quite the same construct.

This could be useful with associative arrays as well, unlike
"${assoc[@]:offset:length}".

I've repeatedly found myself in situations where I had to construct a
whole new array out of not-necessarily-contiguous elements of another
array, just to be able to expand that array somewhere. It would've
been nicer to just use a set of indices directly. I'm now in a
situation where I already have the set of indices and I have to loop
over them to construct the array I need.
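
For reference, the loop I mean looks like this:

array=( zero one two three four five six )
indices=( 2 4 6 )
subset=()
for i in "${indices[@]}"; do
  subset+=( "${array[i]}" )
done
printf '%s\n' "${subset[@]}"   # two, four, six, one per line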

I present this as also applying to characters within a scalar
variable, just to be consistent with ${var:offset:length}, which
applies to both scalars and arrays. Maybe that could be useful. I
don't know.

Does this functionality seem valuable to others?

Sorry for being such an ideas guy.

Zack



Re: Examples of concurrent coproc usage?

2024-06-10 Thread Zachary Santer
On Sat, Jun 8, 2024 at 9:38 PM Martin D Kealey  wrote:
>
>
> On Wed, 10 Apr 2024 at 03:58, Carl Edquist  wrote:
>>
>> Note the coproc shell only does this with pipes; it leaves other user
>> managed fds like files or directories alone.
>>
>> I have no idea why that's the case, and i wonder whether it's intentional
>> or an oversight.
>
>
> Simply closing all pipes is definitely a bug.
>
> This is starting to feel like we really need explicit ways to control 
> attributes on file descriptors.
>
> It should be possible to arrange so that any new subshell will keep 
> "emphemal" filedescriptors until just before invoking a command.
>
> One mechanism would be to add two new per-fd attributes: inherited-by-fork, 
> and ephemeral.
>
> The inherited-by-fork attribute would be set on any fd that's carried through 
> a fork (especially the implicit fork to create a pipeline) and reset on any 
> fd that's the result of a redirection.
>
> The ephemeral attribute is set on any coproc fd (or at least, any that's a 
> pipe to the stdin of a coproc).
>
> Then when both attributes are set on an fd, it would be closed just before 
> launching any inner command, after any redirections have been done. That way 
> we could simply turn off the close-after-fork attribute on a coproc fd if 
> that's desired, but otherwise avoid deadlocks in the simple cases.

I think this sort of mirrors or extends my nosub builtin idea[1],
which didn't make it into Chet's folder of ideas[2].

Has an example loadable command[3] ever graduated to being a bash
builtin? 'asort' and 'csv' sound very helpful, but building a loadable
builtin is going to be a hurdle for a lot of people, and not all
systems support dynamic loading.

For 'fdflags' to become a builtin, some things that bash does behind
the scenes would have to change.[4] I assume the coproc fd behavior of
being closed after most forks is not handled through a system fd flag,
though if there is a system fd flag that would cause an fd to
automatically be closed in any forked child process, that would
simplify also closing coproc fds in process substitutions. Heck, bash
could use its own internal fork() function that calls the system
fork() function and then also closes all coproc fds if it finds itself
in the child process. And then just call that from anywhere that calls
the system fork() now.

Point being, maybe the solution would be an extended fdflags builtin
that takes additional arguments not corresponding to system fd flags
but used to toggle bash's behavior with respect to fds.
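
For illustration only, with an invented flag name:

fdflags -s +noclosub "${fd}"   # hypothetical bash-internal flag: keep fd out of subshells
fdflags -s -noclosub "${fd}"   # back to the default behavior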

[1]: https://lists.gnu.org/archive/html/bug-bash/2024-04/msg00087.html
[2]: https://lists.gnu.org/archive/html/bug-bash/2024-05/msg00371.html
[3]: https://git.savannah.gnu.org/cgit/bash.git/tree/examples/loadables
[4]: https://lists.gnu.org/archive/html/bug-bash/2024-02/msg00194.html



Re: REQUEST - bash floating point math support

2024-06-06 Thread Zachary Santer
On Wed, Jun 5, 2024 at 4:08 PM Robert Elz  wrote:
>
> That's a perfect case for scaled integers - no-one ever deals with
> fractions of cents in this kind of thing (a bank won't ever tell you
> that your balance is $5678.17426 for example, even if the interest
> calculations computed accurately might arrive at that number.)

Additionally, floating-point arithmetic introduces rounding errors
that are *very undesirable* in a financial context. There, you will
likely see fixed-point arithmetic, which is effectively the same thing
as scaled integers.
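
For example, an interest calculation in whole cents needs nothing beyond
bash's integer arithmetic (a minimal sketch):

balance=567817                                   # $5678.17, stored as cents
interest=$(( (balance * 425 + 5000) / 10000 ))   # 4.25%, rounded to the nearest cent
printf '$%d.%02d\n' "$(( interest / 100 ))" "$(( interest % 100 ))"   # $241.32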



REQUEST - bash floating point math support

2024-06-05 Thread Zachary Santer
On Tue, Jun 4, 2024 at 4:01 PM Saint Michael  wrote:
>
> >
> > It's time to add floating point variables and math to bash.
>
> It just makes it so much easier to solve business problems without external
> calls to bc or Python.
> Please let's overcome the "shell complex". Let's treat bash as a real language.

You want to expound on use cases?

I've seen one, for myself: writing a script to bind keyboard shortcuts
to, so I could change the level of screen magnification in GNOME by
increments of less than 100%. The magnification factor is handled as a
fractional number - 1.5, 1.75, etc. So, to change the magnification
factor by increments of 0.25 or 0.5, I had to print an expression into
bc in a command substitution.
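
That workaround looked roughly like this (the gsettings schema and key
here are from memory, so treat them as an assumption):

factor=$(gsettings get org.gnome.desktop.a11y.magnifier mag-factor)
factor=$(bc <<< "${factor} + 0.25")
gsettings set org.gnome.desktop.a11y.magnifier mag-factor "${factor}"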

The math that people want to do in bash is going to be integer the
vast majority of the time, though, and scripts are of course written
to expect integer math.

Bash could potentially detect floating point literals within
arithmetic expansions and adjust the operations to use floating point
math in that case. I believe C treats a literal 10 as an integer and a
literal 10.0 as a floating point number, for instance, so this
wouldn't really be going against the grain.
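
Sketching the idea (the second line shows the proposed behavior, not
anything current bash does):

echo "$(( 10 / 4 ))"     # 2: all-integer operands, unchanged
echo "$(( 10.0 / 4 ))"   # would print 2.5; today it's a syntax error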

For completeness, a floating point variable attribute could
potentially be added. 'declare -i var' will cause var to be treated as
an integer, though var can be referenced within an arithmetic
expansion without this attribute. declare -f and -r (real, anybody?)
are already taken for other things, so I'm not sure what the natural
choice of option would be.

Zack



Re: readonly inconsistency with arrays

2024-06-03 Thread Zachary Santer
On Mon, Jun 3, 2024 at 6:16 PM Will Allan via Bug reports for the GNU
Bourne Again SHell  wrote:
>
> init_vars () {
>   readonly string="foo"
>   readonly int=100
>   readonly array=(1 2)

My understanding is that the readonly builtin isn't supposed to handle
compound assignment syntax like the declare and local builtins
do.[1][2] That it might try to anyway is likely unintended.

Your best bet is to do:
  array=(1 2)
  readonly array
instead of trying to combine the two. This should give you the
behavior you expect.
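
Alternatively, since declare does handle compound assignment, something
like this should also work, with -g because init_vars is a function:

init_vars () {
  declare -gr string="foo"
  declare -gr int=100
  declare -gr array=(1 2)
}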

[1]: 
https://www.gnu.org/software/bash/manual/html_node/Bourne-Shell-Builtins.html#index-readonly
[2]: https://www.gnu.org/software/bash/manual/html_node/Arrays.html



Re: printf -u "$fd"?

2024-05-22 Thread Zachary Santer
On Wed, May 22, 2024 at 12:32 AM Zachary Santer  wrote:
>
> In my Rocky Linux 9.1 VM:
> $ bash --version
> GNU bash, version 5.1.8(1)-release [...]
> $ exec {fd_A}> >( cat > file_A.txt )
> $ exec {fd_B}> >( cat > file_B.txt )
> $ printf 'words\n' | tee /dev/fd/"${fd_A}" /dev/fd/"${fd_B}"
> words
> $ exec {fd_A}>&- {fd_B}>&-
> $ cat file_A.txt
> words
> $ cat file_B.txt
> words
> $ exec {fd_A}> >( tr 'w' 'W' > file_A.txt )
> $ exec {fd_B}> >( tr 'w' 'W' > file_B.txt )
> $ exec {fd_A}>&- {fd_B}>&-
> $ cat file_A.txt
> $ cat file_B.txt
> $

Yes, I missed a line there, several times actually, and then pasted that here.

$ exec {fd_A}> >( cat > file_A.txt )
$ exec {fd_B}> >( cat > file_B.txt )
$ printf 'words\n' | tee /dev/fd/"${fd_A}" /dev/fd/"${fd_B}"
words
$ exec {fd_A}>&- {fd_B}>&-
$ cat file_A.txt
words
$ cat file_B.txt
words
$ exec {fd_A}> >( tr 'w' 'W' > file_A.txt )
$ exec {fd_B}> >( tr 'w' 'W' > file_B.txt )
$ printf 'words\n' | tee /dev/fd/"${fd_A}" /dev/fd/"${fd_B}"
words
$ exec {fd_A}>&- {fd_B}>&-
$ cat file_A.txt
Words
$ cat file_B.txt
Words

I should really sit on these emails more often.



Re: printf -u "$fd"?

2024-05-21 Thread Zachary Santer
On Tue, May 21, 2024 at 3:06 PM Chet Ramey  wrote:
>
> On 5/21/24 11:14 AM, Zachary Santer wrote:
> > On Tue, May 21, 2024 at 9:01 AM Chet Ramey  wrote:
> >>
> >> On 5/21/24 6:17 AM, Zachary Santer wrote:
> >>
> >>> I was wondering what the relationship between the devel and master
> >>> branches was.
> >>
> >> No mystery: the devel branch captures ongoing development, gets the latest
> >> bug fixes, and is where new features appear. The master branch is for
> >> stable releases.
> >
> > But the devel branch won't get changes for bash versions beyond what's
> > in alpha right now?
>
> I'm genuinely curious how you got that from what I said, especially since
> devel has gotten changes since I released bash-5.3-alpha.

Changes I assumed would make it into master while master is bash-5.3.
It sounded like you didn't want to implement anything in devel right
now that wasn't going to make it into bash-5.3. I probably didn't
phrase that very well.

> > Do you create patches from devel and apply them to
> > master?
>
> Sort of, I take the code changes from the appropriate devel commit and
> generate patches against the previous patch level. Sometimes devel and
> previous-release diverge pretty wildly, so it's not always
> straightforward. Then I apply the patches to master and push them as
> separate commits.
>
> > I guess, to be more clear, master is in the history of
> > bash-5.3-alpha,
>
> Yes, I apply the bash testing versions to the last stable release so
> people can, if they wish, see what's changed at a high level. Then I'll
> push bash-5.3-beta on top of alpha, and so on.
>
> > so I assume master will just fast-forward to that
> > point when you're happy with bash-5.3.
>
> Yes.
>
> > Meanwhile, devel is completely
> > off doing its own thing, in terms of the git commit history. I kind
> > of have a hard time wrapping my mind around that.
>
> It's how I set it up when I inherited the git repository. master has always
> been a history of release versions.

And there are some source code things that are different under devel
than in master that aren't meant to ever make it into master, if I
remember correctly? I'm kind of trying to come up with a better way to
do this, one that would still allow master and devel to serve the
purposes they do now. If you're at all interested.

> >>> I saw that you turned MULTIPLE_COPROCS=1 on by default
> >>> under devel, but haven't touched the somewhat more substantial changes
> >>> that sounded forthcoming, from that whole conversation.
> >>
> >> Which one is that?
> >
> > So this email[1] was just about the config-top.h change, I guess, but
> > in the prior one from you quoted there[2], you seemed to be
> > referencing only removing the coproc once both file descriptors
> > pointing to it have been closed by the user.
>
> I haven't committed to doing anything with how coprocs are reaped, and if I
> do it will certainly not be before bash-5.3.
>
>
> > Additionally, I was hoping the discussion of having a way to make fds
> > not CLOEXEC without a loadable builtin[3][4] would get some more
> > attention.
>
> I haven't returned to it, but kre's syntax is reasonable. The problem with
> doing it is described in
>
> https://lists.gnu.org/archive/html/bug-bash/2024-02/msg00194.html
>
> so it would take more work and thought, and it's not a priority.
>
> > I want to say I tried to do 'tee /dev/fd/A /dev/fd/B' at
> > some point and didn't have the background knowledge to understand why
> > it wouldn't work.
>
> That depends on your /dev/fd implementation. There's nothing in that
> command that bash could affect.

Honestly, I might've been looking at a limitation of MSYS2.

In my Rocky Linux 9.1 VM:
$ bash --version
GNU bash, version 5.1.8(1)-release [...]
$ exec {fd_A}> >( cat > file_A.txt )
$ exec {fd_B}> >( cat > file_B.txt )
$ printf 'words\n' | tee /dev/fd/"${fd_A}" /dev/fd/"${fd_B}"
words
$ exec {fd_A}>&- {fd_B}>&-
$ cat file_A.txt
words
$ cat file_B.txt
words
$ exec {fd_A}> >( tr 'w' 'W' > file_A.txt )
$ exec {fd_B}> >( tr 'w' 'W' > file_B.txt )
$ exec {fd_A}>&- {fd_B}>&-
$ cat file_A.txt
$ cat file_B.txt
$

God knows what I was trying to do the first time, or what's going on
with the second set of procsubs there, but I didn't get "tee:
/dev/fd/N: No such file or directory" like I was expecting. Think I'll
leave this bit of the discussion to the people who know what they're
talking about.

> >>> Do you keep a list of TODOs and things under consideration somewhere?
> >>
> >> I do, it's more of the `folder of ideas' style.
> >
> > Did nosub[5] get in there? Just generalize how coproc fds get handled
> > into something that can be turned on or off for any fd.
>
> No, I don't think it's something I'm going to implement.

Eh well, I thought it would be valuable.



Re: printf -u "$fd"?

2024-05-21 Thread Zachary Santer
On Tue, May 21, 2024 at 9:01 AM Chet Ramey  wrote:
>
> On 5/21/24 6:17 AM, Zachary Santer wrote:
>
> > I was wondering what the relationship between the devel and master
> > branches was.
>
> No mystery: the devel branch captures ongoing development, gets the latest
> bug fixes, and is where new features appear. The master branch is for
> stable releases.

But the devel branch won't get changes for bash versions beyond what's
in alpha right now? Do you create patches from devel and apply them to
master? I guess, to be more clear, master is in the history of
bash-5.3-alpha, so I assume master will just fast-forward to that
point when you're happy with bash-5.3. Meanwhile, devel is completely
off doing its own thing, in terms of the git commit history. I kind of
have a hard time wrapping my mind around that.

> > I saw that you turned MULTIPLE_COPROCS=1 on by default
> > under devel, but haven't touched the somewhat more substantial changes
> > that sounded forthcoming, from that whole conversation.
>
> Which one is that?

So this email[1] was just about the config-top.h change, I guess, but
in the prior one from you quoted there[2], you seemed to be
referencing only removing the coproc once both file descriptors
pointing to it have been closed by the user.

Additionally, I was hoping the discussion of having a way to make fds
not CLOEXEC without a loadable builtin[3][4] would get some more
attention. I want to say I tried to do 'tee /dev/fd/A /dev/fd/B' at
some point and didn't have the background knowledge to understand why
it wouldn't work.

> > So I take it
> > MULTIPLE_COPROCS=1 will be enabled in bash-5.3 but other potential
> > changes would come later?
>
> Probably, yes.
>
> > Do you keep a list of TODOs and things under consideration somewhere?
>
> I do, it's more of the `folder of ideas' style.

Did nosub[5] get in there? Just generalize how coproc fds get handled
into something that can be turned on or off for any fd.

[1] https://lists.gnu.org/archive/html/bug-bash/2024-04/msg00149.html
[2] https://lists.gnu.org/archive/html/bug-bash/2024-04/msg00105.html
[3] https://lists.gnu.org/archive/html/bug-bash/2024-04/msg00085.html
[4] https://lists.gnu.org/archive/html/bug-bash/2024-04/msg00100.html
[5] https://lists.gnu.org/archive/html/bug-bash/2024-04/msg00087.html



Re: printf -u "$fd"?

2024-05-21 Thread Zachary Santer
On Mon, May 20, 2024 at 3:03 PM Chet Ramey  wrote:
>
> On 5/17/24 10:53 PM, Zachary Santer wrote:
>
> > So here's another tangent, but has it been considered to add an option
> > to the printf builtin to print to a given file descriptor, rather than
> > stdout? If printing to a number of different file descriptors in
> > succession, such an option would appear to have all the same benefits
> > as read's -u option.
>
> It doesn't actually save anything; it's just syntactic sugar. Since
> `printf' uses stdio internally (as opposed to `read', which uses the
> supplied file descriptor directly), you're still going to have to dup2
> it internally somewhere. dprintf could do some of the work here, and bash
> has a replacement for systems where it's missing, but that's more than I
> want to change before bash-5.3 comes out. Maybe after that.

I was wondering what the relationship between the devel and master
branches was. I saw that you turned MULTIPLE_COPROCS=1 on by default
under devel, but haven't touched the somewhat more substantial changes
that sounded forthcoming, from that whole conversation. So I take it
MULTIPLE_COPROCS=1 will be enabled in bash-5.3 but other potential
changes would come later?

Do you keep a list of TODOs and things under consideration somewhere?



printf -u "$fd"?

2024-05-17 Thread Zachary Santer
Was «difference between read -u fd and read <&"$fd"» on help-b...@gnu.org

On Thu, May 16, 2024 at 12:51 AM Kerin Millar  wrote:
>
> On Thu, 16 May 2024, at 3:25 AM, Peng Yu wrote:
> > Hi,
> >
> > It appears to me that read -u fd and read <&"$fd" achieve the same
> > result. But I may miss corner cases when they may be different.
> >
> > Is it true that they are exactly the same?
>
> They are not exactly the same. To write read -u fd is to instruct the read 
> builtin to read directly from the specified file descriptor. To write read 
> <&"$fd" entails one invocation of the dup2 syscall to duplicate the specified 
> file descriptor to file descriptor #0 and another invocation to restore it 
> once read has concluded. That's measurably slower when looping over read.

So here's another tangent, but has it been considered to add an option
to the printf builtin to print to a given file descriptor, rather than
stdout? If printing to a number of different file descriptors in
succession, such an option would appear to have all the same benefits
as read's -u option.
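
To sketch both spellings, with the -u option being the hypothetical one:

printf -u "${fd}" '%s\n' "${line}"   # proposed
printf '%s\n' "${line}" >&"${fd}"    # what we write today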

Zack



Re: Examples of concurrent coproc usage?

2024-04-28 Thread Zachary Santer
On Sat, Apr 27, 2024 at 1:01 PM Carl Edquist  wrote:
>
> On Mon, 22 Apr 2024, Martin D Kealey wrote:
>
> > On Mon, 22 Apr 2024, 09:17 Carl Edquist,  wrote:
> >
> >> But yeah currently a pipe with a series of records and multiple
> >> cooperating/competing readers perhaps only works if the records have a
> >> fixed size. A new readd[elim] system call like you're talking about
> >> would allow safely reading a single variable-length record at a time.
> >
> > There are other options, such as length-prefixed records, or tagged
> > (typed) records, but of course those aren't POSIX text files.
>
> That'd work for "cooperating" readers (as you put it) where they are not
> attempting to read at the same time.  Though reading a single byte at a
> time also works in that case.
>
> My thought is that (performance considerations aside), the real functional
> improvement with a new "readd" call would be with _competing_ readers
> (more than one read call waiting on the same pipe at the same time).
>
> In that case a length-prefixed or type-tagged record wouldn't seem to work
> with the regular read(2), because a single reader would not be able to
> read the length/type _and_ the corresponding record together.  You can't
> work around this by reading a byte at a time either.  That's why I said it
> would only seem to work (with read(2)) if the records have a fixed size.
> (In order to grab a whole record atomically.)
>
> But a new "readd" call would allow multiple competing readers to read,
> say, a stream of filenames from a pipe, without having to pad each one to
> PATH_MAX bytes.
>
> It seems that if there is only one reader at a given time though
> ("cooperating"), then it's just a matter of performance between
> read(2)'ing one byte at a time vs using a new readd call.
>
> ...
>
> I'm not trying to advocate for or against you contacting the kernel folks
> with your idea; it just seems to me that the scenario with multiple
> competing readers might be the strongest argument for it.

Where it's available, you'd probably want to do this with a POSIX
message queue with all the messages sent with the same priority.

If you really want a bash competing-process job-server arrangement by
passing messages through a single pipe, you might be able to use
flock(1) to ensure that only one process is allowed to read from the
pipe at a time.
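
A minimal sketch of that, assuming ${pipe_fd} is already open on the
shared pipe and ./lockfile exists:

exec {lock_fd}< ./lockfile           # any file the competing readers share
flock "${lock_fd}"                   # block until we hold the lock
IFS='' read -r task <&"${pipe_fd}"   # now we're the only reader
flock -u "${lock_fd}"                # release the lock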

If you take out this use case, a potential readd[elim] call does sound
like it's kind of just there to have a more efficient read command in
the shell.



Re: Erasing sensitive data from memory?

2024-04-21 Thread Zachary Santer
On Sun, Apr 21, 2024 at 3:05 PM Greg Wooledge  wrote:
>
>  seems to be relevant here.
> I won't say that you have malicious intent here, but a script that
> behaves in this way is just a step or two away from being a password
> intercepter.

Would you believe I searched for "password" on your wiki in relation
to this question, but ignored that article because what I was trying
to do didn't have anything to do with ssh, scp, or sftp? Consider the
advice within taken.



Erasing sensitive data from memory?

2024-04-21 Thread Zachary Santer
C23 provides memset_explicit() to ensure memory containing sensitive
data is cleared.[1] Using a function like this is necessary to avoid
compilers optimizing out the operation. Of course, bash isn't
optimizing your script for you, but consider this kind of naive
solution:

$ IFS='' read -e -r -s -p 'password: ' password
password:
$ printf '|%s|\n' "${password}"
|abc123|
$ printf -v password '%*s' "${#password}" ''
$ printf '|%s|\n' "${password}"
|      |

Does bash malloc new memory for the variable every time it's set? If
so, I'd imagine the memory storing the prior version of the variable
is free'd, but continues to contain the sensitive data.

Bash is malloc'ing and free'ing constantly, to do everything. How
difficult would it be to ensure that the value of the password
variable -- as expanded in the calls to 'printf', for instance -- is
also cleared from wherever else it might've been stored, after the
command has executed?

Maybe this could be done with a new variable attribute set with
'declare'. And then bash would have to ensure that the memory from
everywhere the variable gets set or expanded is also erased after use,
and then the contents of the variable itself are erased when the
variable is unset or as the script exits.
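
Usage might look something like this; the -Z option and the wiping
behavior are invented here purely for illustration:

declare -Z password            # hypothetical: scrub backing memory on reassignment/unset
IFS='' read -e -r -s -p 'password: ' password
check_password "${password}"   # stand-in; bash would also scrub its temporary copies
unset password                 # memory wiped, a la memset_explicit(), before being freed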

Would this be worthwhile at all?

[1]: 
https://www.gnu.org/software/gnulib/manual/html_node/memset_005fexplicit.html



Re: Examples of concurrent coproc usage?

2024-04-16 Thread Zachary Santer
On Tue, Apr 16, 2024 at 3:56 AM Andreas Schwab  wrote:
>
> On Apr 16 2024, Carl Edquist wrote:
>
> > Well, you _can_ shovel binary data too: (*)
> >
> >   while IFS= read -rd '' X; do printf '%s\0' "$X"; done
> >
> > and use that pattern to make a shell-only version of tee(1) (and I suppose
> > paste(1)).  Binary data doesn't work if you're reading newline-terminated
> > records, because you cannot store the NUL character in a shell
> > variable. But you can delimit your records on NULs, and use printf to
> > reproduce them.
>
> Though that will likely add a spurious null at EOF.

It just wouldn't copy over whatever might have followed the final null
byte, if the data isn't null-terminated.

printf_format='%s\x00'
while
  IFS='' read -r -d '' X ||
    {
      # read failed, so we're at EOF; if a final, null-unterminated
      # record remains in X, switch to a format without the trailing
      # null and let the body run one last time
      [[ -n ${X} ]] &&
        {
          printf_format='%s'
          true
        }
    }
do
  printf -- "${printf_format}" "${X}"
done

Might've gotten lucky with all those .so files ending in a null byte
for whatever reason.

There's no way to force this to give you the equivalent of sized
buffers. 'read -N' obviously has the same problem of trying to store
the null character in a variable. So, if you're trying to run this on
a huge text file, you're going to end up trying to shove that entire
file into a variable.



Re: Examples of concurrent coproc usage?

2024-04-15 Thread Zachary Santer
On Mon, Apr 15, 2024 at 1:57 PM Carl Edquist  wrote:
>
> the thing discussed in my last email to the list (about
> coproc fds being set close-on-exec) makes them unusable for anything
> beyond stdin/stdout/stderr.
>
> [It might sound like an obscure use case, but once you realize what you
> can do with it, it becomes the main use case.]

From what Chet was saying, I thought something like this would still work:

$ exec {cat}> >( cat; )
$ coproc tee { { tee /dev/fd/${cat2}; } {cat2}>&"${cat}"; }
[2] 1952
tee: /dev/fd/11: No such file or directory

Just dup another fd without using exec, and then use that. Evidently
not. Evidence for what you thought was actually going on, I guess.

While you can just printf the same thing once for each fd, that
doesn't work super well for binary data.

I've thought about splitting and recombining pipelines like this, but
I've never had a reason to.



Re: Examples of concurrent coproc usage?

2024-04-14 Thread Zachary Santer
On Sat, Apr 13, 2024 at 4:10 PM Chet Ramey  wrote:
>
> The original intent was to allow the shell to drive a long-running process
> that ran more-or-less in parallel with it. Look at examples/scripts/bcalc
> for an example of that kind of use.

$ ./bcalc
equation: -12
./bcalc: line 94: history: -1: invalid option
history: usage: history [-c] [-d offset] [n] or history -anrw
[filename] or history -ps arg [arg...]
-12
equation: exit

diff --git a/examples/scripts/bcalc b/examples/scripts/bcalc
index bc7e2b40..826eca4f 100644
--- a/examples/scripts/bcalc
+++ b/examples/scripts/bcalc
@@ -91,7 +91,7 @@ do
 	esac
 
 	# save to the history list
-	history -s "$EQN"
+	history -s -- "$EQN"
 
 	# run it through bc
 	calc "$EQN"



Re: Examples of concurrent coproc usage?

2024-04-13 Thread Zachary Santer
On Sat, Apr 13, 2024 at 2:45 PM Chet Ramey  wrote:
>
> On 4/8/24 11:44 PM, Zachary Santer wrote:
>
> > The fact that the current implementation allows the coproc fds to get
> > into process substitutions is a little weird to me. A process
> > substitution, in combination with exec, is kind of the one other way
> > to communicate with background processes through fds without using
> > FIFOs. I still have to close the coproc fds there myself, right now.
>
> So are you advocating for the shell to close coproc file descriptors
> when forking children for command substitutions, process substitutions,
> and subshells, in addition to additional coprocs? Right now, it closes
> coproc file descriptors when forking subshells.

Yes. I couldn't come up with a way that letting the coproc fds into
command substitutions could cause a problem, in the same sense that
letting them into regular (  ) subshells doesn't seem like a problem.
That bit is at least good for my arbitrary standard of "consistency,"
though.

At least in my use case, trying to use the coproc file descriptors
directly in a pipeline forced the use of a process substitution,
because I needed the coproc fds accessible in the second segment of
what would've been a three-segment pipeline. (Obviously, I'm using
'shopt -s lastpipe' here.) I ultimately chose to do 'exec {fd}> >(
command )' and redirect from one command within the second segment
into ${fd} instead of ending the second segment with '> >( command );
wait "${?}"'. In the first case, you have all the same drawbacks as
allowing the coproc fds into a subshell forked with &. In the second
case, it's effectively the same as allowing the coproc fds into the
segments of a pipeline that become subshells. I guess that would be a
concern if the segment of the pipeline in the parent shell closes the
fds to the coproc while the pipeline is still executing. That seems
like an odd thing to do, but okay.

Now that I've got my own fds that I'm managing myself, I've turned
that bit of code into a plain, three-segment pipeline, at least for
now.

> > Consider the following situation: I've got different kinds of
> > background processes going on, and I've got fds exec'd from process
> > substitutions, fds from coprocs,
>
> If you have more than one coproc, you have to manage all this yourself
> already.

Not if we manage to convince you to turn MULTIPLE_COPROCS=1 on by
default. Or if someone builds bash that way for themselves.

On Sat, Apr 13, 2024 at 2:51 PM Chet Ramey  wrote:
>
> On 4/9/24 10:46 AM, Zachary Santer wrote:
>
> >> If you want two processes to communicate (really three), you might want
> >> to build with the multiple coproc support and use the shell as the
> >> arbiter.
> >
> > If you've written a script for other people than just yourself,
> > expecting all of them to build their own bash install with a
> > non-default preprocessor directive is pretty unreasonable.
>
> This all started because I wasn't comfortable with the amount of testing
> the multiple coprocs code had undergone. If we can get more people to
> test these features, there's a better chance of making it the default.
>
> > The part that I've been missing this whole time is that using exec
> > with the fds provided by the coproc keyword is actually a complete
> > solution for my use case, if I'm willing to close all the resultant
> > fds myself in background processes where I don't want them to go.
> > Which I am.
>
> Good deal.
>
> > Whether the coproc fds should be automatically kept out of most kinds
> > of subshells, like it is now; or out of more kinds than currently; is
> > kind of beside the point to me now.
>
> Sure, but it's the potential for deadlock that we're trying to reduce.

I hesitate to say to just set MULTIPLE_COPROCS=1 free and wait for
people to complain. I'm stunned at my luck in getting Carl Edquist's
attention directed at this. Hopefully there are other people who
aren't subscribed to this email list who are interested in using this
functionality, if it becomes more fully implemented.

> > But, having a builtin to ensure
> > the same behavior is applied to any arbitrary fd might be useful to
> > people, especially if those fds get removed from process substitutions
> > as well.
>
> What does this mean? What kind of builtin? And what `same behavior'?

Let's say it's called 'nosub', and takes fd arguments. It would make
the shell take responsibility for keeping those fds out of subshells.
Perhaps it could take a -u flag, to make it stop keeping the fd
arguments out of subshells. That would be a potential way to get bash
to quit closing coproc fds in subshells, as the user is declaring that
s/he is now responsible for those fds.
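
Usage of the proposed builtin would look something like this ('consumer'
is a stand-in):

exec {fd}> >( consumer )
nosub "${fd}"      # bash now keeps fd out of the subshells it forks
# ... later ...
nosub -u "${fd}"   # hand responsibility for fd back to the user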

Re: Examples of concurrent coproc usage?

2024-04-09 Thread Zachary Santer
On Mon, Apr 8, 2024 at 3:50 PM Chet Ramey  wrote:
>
> On 4/4/24 7:23 PM, Martin D Kealey wrote:
> > I'm somewhat uneasy about having coprocs inaccessible to each other.
> > I can foresee reasonable cases where I'd want a coproc to utilize one or
> > more other coprocs.
>
> That's not the intended purpose, so I don't think not fixing a bug to
> accommodate some future hypothetical use case is a good idea. That's
> why there's a warning message when you try to use more than one coproc --
> the shell doesn't keep track of more than one.

That use case is always going to be hypothetical if the support for it
isn't really there, though, isn't it?

> If you want two processes to communicate (really three), you might want
> to build with the multiple coproc support and use the shell as the
> arbiter.

If you've written a script for other people than just yourself,
expecting all of them to build their own bash install with a
non-default preprocessor directive is pretty unreasonable.

The part that I've been missing this whole time is that using exec
with the fds provided by the coproc keyword is actually a complete
solution for my use case, if I'm willing to close all the resultant
fds myself in background processes where I don't want them to go.
Which I am.

$ coproc CAT1 { cat; }
[1] 1769
$ exec {CAT1_2[0]}<&"${CAT1[0]}" {CAT1_2[1]}>&"${CAT1[1]}"
{CAT1[0]}<&- {CAT1[1]}>&-
$ declare -p CAT1 CAT1_2
declare -a CAT1=([0]="-1" [1]="-1")
declare -a CAT1_2=([0]="10" [1]="11")
$ coproc CAT2 { exec {CAT1_2[0]}<&- {CAT1_2[1]}>&-; cat; }
[2] 1771
$ exec {CAT2_2[0]}<&"${CAT2[0]}" {CAT2_2[1]}>&"${CAT2[1]}"
{CAT2[0]}<&- {CAT2[1]}>&-
$ declare -p CAT2 CAT2_2
declare -a CAT2=([0]="-1" [1]="-1")
declare -a CAT2_2=([0]="12" [1]="13")
$ printf 'dog\ncat\nrabbit\ntortoise\n' >&"${CAT1_2[1]}"
$ IFS='' read -r -u "${CAT1_2[0]}" line; printf '%s\n' "${?}:${line}"
0:dog
$ exec {CAT1_2[1]}>&-
$ IFS='' read -r -u "${CAT1_2[0]}" line; printf '%s\n' "${?}:${line}"
0:cat
[1]-  Done                    coproc CAT1 { cat; }
$ IFS='' read -r -u "${CAT1_2[0]}" line; printf '%s\n' "${?}:${line}"
0:rabbit
$ IFS='' read -r -u "${CAT1_2[0]}" line; printf '%s\n' "${?}:${line}"
0:tortoise
$ IFS='' read -r -u "${CAT1_2[0]}" line; printf '%s\n' "${?}:${line}"
1:
$ exec {CAT1_2[0]}<&- {CAT2_2[0]}<&- {CAT2_2[1]}>&-
$
[2]+  Done

No warning message when creating the CAT2 coproc. I swear, I was so
close to getting this figured out three years ago, unless the behavior
when a coproc still exists only because other non-coproc fds are
pointing to it has changed since whatever version of bash I was
testing in at the time.

I am completely satisfied with this solution.

The trial and error aspect to figuring this kind of stuff out is
really frustrating. Maybe I'll take some time and write a Wooledge
Wiki article on this at some point, if there isn't one already.

Whether the coproc fds should be automatically kept out of most kinds
of subshells, like it is now; or out of more kinds than currently; is
kind of beside the point to me now. But, having a builtin to ensure
the same behavior is applied to any arbitrary fd might be useful to
people, especially if those fds get removed from process substitutions
as well. If the code for coproc fds gets applied to these fds, then
you've got more chances to see that the logic actually works
correctly, if nothing else.



Re: Examples of concurrent coproc usage?

2024-04-08 Thread Zachary Santer
On Mon, Apr 8, 2024 at 11:07 AM Chet Ramey  wrote:
>
> Bash doesn't close the file descriptor in $fd. Since it's used with `exec',
> it's under the user's control.
>
> The script here explicitly opens and closes the file descriptor, so it
> can read until read returns failure. It doesn't really matter when the
> process exits or whether the shell closes its ends of the pipe -- the
> script has made a copy that it can use for its own purposes.

> (And you need to use exec to close it when you're done.)

Caught that shortly after sending the email. Yeah, I know.

> You can do the same thing with a coproc. The question is whether or
> not scripts should have to.

If there's a way to exec fds to read from and write to the same
background process without first using the coproc keyword or using
FIFOs I'm all ears. To me, coproc fills that gap. I'd be fine with
having to close the coproc fds in subshells myself. Heck, you still
have to use exec to close at least the writing coproc fd in the parent
process to get the coproc to exit, regardless.

The fact that the current implementation allows the coproc fds to get
into process substitutions is a little weird to me. A process
substitution, in combination with exec, is kind of the one other way
to communicate with background processes through fds without using
FIFOs. I still have to close the coproc fds there myself, right now.

Consider the following situation: I've got different kinds of
background processes going on, and I've got fds exec'd from process
substitutions, fds from coprocs, and fds exec'd from other things, and
I need to keep them all out of the various background processes. Now I
need different arrays of fds, so I can close all the fds that get into
a background process forked with & without trying to close the coproc
fds there; while still being able to close all the fds, including the
coproc fds, in process substitutions.
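
Concretely, the bookkeeping that forces looks something like this, per
the current behavior described above (command names are stand-ins):

exec {psub_fd}> >( log_sink )   # fd exec'd from a process substitution
coproc WORKER { worker; }       # fds from a coproc

# in a '&' background job, bash has already closed the coproc fds, so
# only psub_fd must be closed, and the coproc fds must not be touched:
{ exec {psub_fd}>&-; long_task; } &

# in a process substitution, the coproc fds leak through, so every fd
# has to be closed by hand:
exec {out_fd}> >( exec {psub_fd}>&- {WORKER[0]}<&- {WORKER[1]}>&-; sink )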

I'm curious what the reasoning was there.



Re: Examples of concurrent coproc usage?

2024-04-03 Thread Zachary Santer
On Wed, Apr 3, 2024 at 10:32 AM Chet Ramey  wrote:
>
> How long should the shell defer deallocating the coproc after the process
> terminates? What should it do to make sure that the variables don't hang
> around with invalid file descriptors? Or should the user be responsible for
> unsetting the array variable too? (That's never been a requirement,
> obviously.)

For sake of comparison, and because I don't know the answer, what does
bash do behind the scenes in this situation?

exec {fd}< <( some command )
while IFS='' read -r line <&"${fd}"; do
  # do stuff
done
{fd}<&-

Because the command in the process substitution isn't waiting for
input, (I think) it could've exited at any point before all of its
output has been consumed. Even so, bash appears to handle this
seamlessly.

As the programmer, I know ${fd} contains an fd that's no longer valid
after this point, despite it not being unset.



Re: History Expansion in Arithmetic Expansion

2024-03-24 Thread Zachary Santer
On Sat, Mar 23, 2024 at 11:34 AM A4-Tacks  wrote:
>
>  ```bash
>  $ ((!RANDOM))
>  bash: !RANDOM: event not found
>  ```

I just reported this last August [1]. If you, like me, never use
history expansion, the best solution might be to disable it in your
.bashrc file:
set +o histexpand
or
set +H
if you hate readability.

$ printf '%s\n' "$(( !RANDOM ))"
0

[1]: https://lists.gnu.org/archive/html/bug-bash/2023-08/msg00016.html



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-24 Thread Zachary Santer
On Fri, Mar 22, 2024 at 11:23 AM Chet Ramey  wrote:
>
> This is what you can do with @K.
>
> https://lists.gnu.org/archive/html/bug-bash/2021-08/msg00119.html
>
> Word splitting doesn't happen on the rhs of an assignment statement, so you
> use eval. The @K quoting is eval-safe.

Yeah, but what can you do with @k?

$ unset assoc array assoc_copy
$ declare -A assoc=( [zero]=0 [one]=1 [two]=2 )
$ declare -a array=( "${assoc[@]@k}" )
$ eval "declare -A assoc_copy=( ${assoc[*]@K} )"
$ declare -p assoc array assoc_copy
declare -A assoc=([two]="2" [one]="1" [zero]="0" )
declare -a array=([0]="two" [1]="2" [2]="one" [3]="1" [4]="zero" [5]="0")
declare -A assoc_copy=([two]="2" [one]="1" [zero]="0" )

The difference in expansion behavior between indexed and associative
array compound assignment statements doesn't make sense. As nice as it
is to have expansions that expand to eval-safe expressions, needing
eval less would be nicer.



Re: ${var@A}; hypothetical, related parameter transformations

2024-03-24 Thread Zachary Santer
On Thu, Mar 21, 2024 at 4:08 PM Chet Ramey  wrote:
>
> On 3/20/24 3:05 PM, Zachary Santer wrote:
>
> > it's more work
> > than if there were three separate parameter transformations: one each
> > to always generate a declare command; an assignment statement; and the
> > right hand side of a compound assignment statement or standard
> > assignment statement, depending on whether it's dealing with an
> > array/assoc or scalar.
>
> I am not convinced that tripling the number of relevant variable
> transformations makes the problem any simpler.

It's simpler in the sense that the bash programmer can choose the
behavior they want and is guaranteed to get it.

On Thu, Mar 21, 2024 at 4:12 PM Chet Ramey  wrote:
>
> If you want to be guaranteed a declare command for a particular name,
> use `declare -p'. Parse the result of a (nofork) command substitution
> if you must.

That's fair. I kind of figured ${var@A} was intended to replace 'declare -p'.



Re: ${var@A}; hypothetical, related parameter transformations

2024-03-20 Thread Zachary Santer
On Wed, Mar 20, 2024 at 3:44 PM Greg Wooledge  wrote:
>
> This seems rather theoretical to me.  If the associative array has
> nothing in it, does it really matter whether it's nonexistent, only
> declared as a scope placeholder (I think that's a thing...), or fully
> declared but empty?  The second script which receives the serialized data
> doesn't really care how the first script stored it, does it?  All you
> really need is to have the keys and values replicated faithfully in the
> second script's associative array.  If that gives you an empty hash,
> then that's what you use.
>
> I simply struggle to figure out what the real-world application is for
> some of these tangents.

You're right, at least for me. In my use case, the 'declare -A' bit of
the output from ${assoc[*]@A} had scope implications I hadn't
considered, and now I'm removing that bit from the argument to eval.
What I said in my prior email about actually wanting declare commands
is entirely hypothetical.

The point I'm trying to make is that ${var@A} expanding to either a
declare command, an assignment statement, or nothing, depending on
potentially changing criteria, is unnecessarily cumbersome. In
whatever circumstances where a bash programmer would actually want a
declare statement as the expansion of ${var@A}, it doesn't suit their
purposes for that expansion to not give them that under certain
conditions.

Thus why I think dedicated parameter transformations to give a declare
command or an assignment statement would be preferable. And then
another one to give the right hand side of a compound assignment
statement, i.e. ( [0]='fox' [1]='hound' ), because that's also useful.
Maybe I'm asking for too much.



Re: ${var@A}; hypothetical, related parameter transformations

2024-03-20 Thread Zachary Santer
On Wed, Mar 20, 2024 at 11:27 AM Chet Ramey  wrote:
>
> On 3/19/24 8:56 PM, Zachary Santer wrote:
>
> > So I can get a couple of the things I want by manipulating what I get out
> > of ${var@A} with fairly straightforward parameter expansions. If I needed a
> > declare command and wasn't sure if I would get one, I haven't found a way
> > to expand that to something that is guaranteed to be a declare command
> > maintaining attribute settings. I think you're stuck using an if block at
> > that point.
>
> That expansion will produce a declare command if the variable has any
> attributes, since it needs one to reproduce them. If the variable has a
> value, but no attributes, you'll get an assignment  statement. If there
> are no attributes and no value, you get nothing.

I hadn't even considered what to do with a variable that's declared,
but without attributes or value.

Let's say we've got two variables defined in a function like so:

func () {
  local -ir number=12
  local color='turquoise'
  local number_message
  generate_declare_command number number_message
  local color_message
  generate_declare_command color color_message
}

I want a declare command, no matter what ${var@A} gives me. I have to
write a function for that: generate_declare_command (). That function
can work a couple of reasonable ways:

generate_declare_command () {
  local -n var="${1}"
  local -n message="${2}"
  message="${var@A}"
  message="${message:-${!var}}"
  if [[ ${message} != 'declare -'* ]]; then
message="declare -- ${message}"
  fi
}

or

generate_declare_command () {
  local -n var="${1}"
  local -n message="${2}"
  local var_attrs="${var@a}"
  message="${var@A}"
  message="${message:-${!var}}"
  message="declare -${var_attrs:--} ${message#declare * }"
}

Doing this is more work than normalizing the ${var@A} output into an
assignment statement or whatever's on the right hand side of the first
equals sign, each of which can be accomplished with a single parameter
expansion. (Not that either of those would give you anything if the
variable doesn't have a value.) If you want to normalize to any one of
these three things, working with a parameter transformation that could
expand to a declare command or an assignment statement, it's more work
than if there were three separate parameter transformations: one each
to always generate a declare command; an assignment statement; and the
right hand side of a compound assignment statement or standard
assignment statement, depending on whether it's dealing with an
array/assoc or a scalar.
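
Hypothetically, with invented spellings (none of these exist) and the
number variable as declared above:

${number@D}  ->  declare -ir number='12'   # always a declare command
${number@S}  ->  number='12'               # always an assignment statement
${number@R}  ->  '12'                      # right hand side only; an array
                                           # would give ( [0]="a" [1]="b" )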

Consider that you're talking about changing ${var@A} to expand to a
declare command in the case that the variable has no attributes but is
local to the scope in which the expansion occurs. If you make that
change, then ${var@A} is somewhat unpredictable if you can't assume a
given bash release.



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-20 Thread Zachary Santer
On Wed, Mar 20, 2024 at 9:32 AM Lawrence Velázquez  wrote:
>
> https://lists.gnu.org/archive/html/bug-bash/2019-07/msg00056.html

Wherein he shows that Zsh can do this without eval:

> declare -a array=( a 1 b 2 c 3 )
> declare -A hash=( ${array[@]} )
> declare -p hash
> => typeset -A hash=( [a]=1 [b]=2 [c]=3 )
> declare -a array2=( ${(kv)hash[@]} )
> declare -p array2
> ==> typeset -a array2=( a 1 b 2 c 3 )

and then uses eval in his examples of how Bash could incorporate
similar behavior:

> array=( val1 "val2*[special-chars]" )
> printf -v serialized "%q " "${array[@]}"
> eval "deserialized=($serialized)"

> declare -A hash=( [key1]=val1 ['key2*[special-chars]']=val2 )
> printf -v serialized "%q " "${*hash[@]}"
> typeset -A deserialized_hash
> eval "deserialized_hash=($serialized)"

I don't get it.



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-20 Thread Zachary Santer
On Wed, Mar 20, 2024 at 12:29 AM Lawrence Velázquez  wrote:
>
> This isn't specific to ${var[@]@k}.
>
> $ kv1='a 1 b 2 c 3'
> $ kv2=(a 1 b 2 c 3)
> $ declare -A aa1=($kv1) aa2=(${kv2[@]}) aa3=("${kv2[@]}")
> $ declare -p aa1 aa2 aa3
> declare -A aa1=(["a 1 b 2 c 3"]="" )
> declare -A aa2=(["a 1 b 2 c 3"]="" )
> declare -A aa3=(["a 1 b 2 c 3"]="" )
>
> A couple of previous discussions:
>   - https://lists.gnu.org/archive/html/bug-bash/2020-12/msg00066.html
>   - https://lists.gnu.org/archive/html/bug-bash/2023-06/msg00128.html

There I go, reporting a bug that isn't a bug again.

One would think that enabling this behavior would be the entire
purpose of the alternate ( key value ) syntax. If it doesn't do that,
what benefit does it give over the standard ( [key]=value ) syntax?
Maybe it's easier to use eval with?
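
It is at least trivial to generate for eval, in bash 5.1+ where the
key/value form exists:

kv='a 1 b 2 c 3'
eval "declare -A aa4=( ${kv} )"
declare -p aa4
# declare -A aa4=([c]="3" [b]="2" [a]="1" ), order may vary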



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-19 Thread Zachary Santer
On Tue, Mar 19, 2024 at 11:18 PM Zachary Santer  wrote:
>
> Repeat-By:
>
> $ declare -A assoc_1=( [key 0]='value 0' [key 1]='value 1' [key
> 2]='value 2' [key 3]='value 3' )
> $ unset assoc_2
> $ declare -A assoc_2
> $ printf '|%s|\n' "${assoc_1[*]@k}"
> |key 2 value 2 key 3 value 3 key 0 value 0 key 1 value 1|
> # All one word. Makes sense.
> $ assoc_2=( "${assoc_1[*]@k}" )
> $ declare -p assoc_2
> declare -A assoc_2=(["key 2 value 2 key 3 value 3 key 0 value 0 key 1
> value 1"]="" )
> # Still makes sense.
> $ printf '|%s|\n' "${assoc_1[@]@k}"
> |key 2|
> |value 2|
> |key 3|
> |value 3|
> |key 0|
> |value 0|
> |key 1|
> |value 1|
> # Got expanded to separate words, like it's supposed to.
> $ assoc_2=( "${assoc_1[@]@k}" )
> $ declare -p assoc_2
> declare -A assoc_2=(["key 2 value 2 key 3 value 3 key 0 value 0 key 1
> value 1"]="" )
> # Here, it did not.

This is specific to the compound assignment syntax for an associative
array, even.

$ unset array_1
$ declare -a array_1
$ array_1=( "${assoc_1[*]@k}" )
$ declare -p array_1
declare -a array_1=([0]="key 2 value 2 key 3 value 3 key 0 value 0 key
1 value 1")
$ array_1=( "${assoc_1[@]@k}" )
$ declare -p array_1
declare -a array_1=([0]="key 2" [1]="value 2" [2]="key 3" [3]="value
3" [4]="key 0" [5]="value 0" [6]="key 1" [7]="value 1")



"${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-19 Thread Zachary Santer
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: msys
Compiler: gcc
Compilation CFLAGS: -march=nocona -msahf -mtune=generic -O2 -pipe
-D_STATIC_BUILD
uname output: MINGW64_NT-10.0-19045 Zack2021HPPavilion 3.4.10.x86_64
2024-02-10 08:39 UTC x86_64 Msys
Machine Type: x86_64-pc-msys

Bash Version: 5.2
Patch Level: 26
Release Status: release

Description:

The man page's descriptions of ${var@K} and ${var@k}:
K Produces a possibly-quoted version of the value of parameter, except
that it prints the values of indexed and associative arrays as a
sequence of quoted key-value pairs (see Arrays above).
k Like the K transformation, but expands the keys and values of
indexed and associative arrays to separate words after word splitting.

In the section on Arrays, we see:
When assigning to an associative array, the words in a compound
assignment may be either assignment statements, for which the
subscript is required, or a list of words that is interpreted as a
sequence of alternating keys and values: name=( key1 value1 key2
value2 ...). These are treated identically to name=( [key1]=value1
[key2]=value2 ...). The first word in the list determines how the
remaining words are interpreted; all assignments in a list must be of
the same type. When using key/value pairs, the keys may not be missing
or empty; a final missing value is treated like the empty string.

The "${assoc[@]@k}" transformation doesn't leave the separate
key/value words quoted. I'm not sure if the phrasing of the
documentation implies that it would or not.

As such, I would expect that
$ declare -A assoc_2=( "${assoc_1[@]@k}" )
would create assoc_2 as a duplicate of assoc_1. However, we see that
the entire expansion becomes the key for a single array element, with
its value being the empty string.

Repeat-By:

$ declare -A assoc_1=( [key 0]='value 0' [key 1]='value 1' [key
2]='value 2' [key 3]='value 3' )
$ unset assoc_2
$ declare -A assoc_2
$ printf '|%s|\n' "${assoc_1[*]@k}"
|key 2 value 2 key 3 value 3 key 0 value 0 key 1 value 1|
# All one word. Makes sense.
$ assoc_2=( "${assoc_1[*]@k}" )
$ declare -p assoc_2
declare -A assoc_2=(["key 2 value 2 key 3 value 3 key 0 value 0 key 1
value 1"]="" )
# Still makes sense.
$ printf '|%s|\n' "${assoc_1[@]@k}"
|key 2|
|value 2|
|key 3|
|value 3|
|key 0|
|value 0|
|key 1|
|value 1|
# Got expanded to separate words, like it's supposed to.
$ assoc_2=( "${assoc_1[@]@k}" )
$ declare -p assoc_2
declare -A assoc_2=(["key 2 value 2 key 3 value 3 key 0 value 0 key 1
value 1"]="" )
# Here, it did not.

If it were up to me, "${scalar@k}" wouldn't do anything different than
"${scalar}" and "${assoc[@]@K}" would expand to separate, quoted
words.

$ scalar='some words'
$ printf '|%s|\n' "${scalar@K}"
|'some words'|
$ printf '|%s|\n' "${scalar@k}"
|'some words'|
# This is inconsistent with what the @k parameter transformation does
# with an array.
$ printf '|%s|\n' "${assoc_1[*]@K}"
|"key 2" "value 2" "key 3" "value 3" "key 0" "value 0" "key 1" "value 1" |
$ printf '|%s|\n' "${assoc_1[@]@K}"
|"key 2" "value 2" "key 3" "value 3" "key 0" "value 0" "key 1" "value 1" |
# Quoted [*] and [@] array expansions do things differently everywhere
# else I can think of.



Re: ${var@A}; hypothetical, related parameter transformations

2024-03-19 Thread Zachary Santer
On Mon, Mar 18, 2024 at 6:26 PM Zachary Santer  wrote:
>
> I guess, in bash 5.1+, it could pass
> "${assoc[*]@K}"
> and then the receiving end could
> eval "assoc=( ${assoc_message} )"
> if I wanted to avoid declaring the associative array anew.

If I wanted to duplicate an indexed array, however, whether in sending it
to another process or not, I would need to be able to expand only the right
hand side of the compound assignment statement, i.e. ( [5]="Hello"
[12]="world!" )

> For my
> use case, if, for whatever reason, bash decided to send associative
> arrays as compound assignment statements without being in the context
> of a declare command, the receiving end would have to have already
> declared the associative array variable before eval'ing what it
> received. Given that the documentation doesn't specify when bash would
> choose to generate an assignment statement vice a declare command,
> maybe that would be the safe way to go.

Just changed what happens on the receiving end to:
eval "${assoc_message#declare * }"
to ensure the associative arrays only get declared where I want them to be.

I could have done
eval "assoc=${assoc_message#*=}"
expanding to only the right hand side of the compound assignment statement.
This seemed redundant in my case, because the associative arrays are named
the same thing in both sending and receiving processes.

So I can get a couple of the things I want by manipulating what I get out
of ${var@A} with fairly straightforward parameter expansions. If I needed a
declare command and wasn't sure if I would get one, I haven't found a way
to expand that to something that is guaranteed to be a declare command
maintaining attribute settings. I think you're stuck using an if block at
that point.


Re: ${var@A}; hypothetical, related parameter transformations

2024-03-18 Thread Zachary Santer
On Mon, Mar 18, 2024 at 6:26 PM Zachary Santer  wrote:
> {
>   declare -p assoc
>   printf '\x00'
> } >&"${fd}"
> in bash 4.2, which I'm actually pretty happy with.
Rather. No %s there.



${var@A}; hypothetical, related parameter transformations

2024-03-18 Thread Zachary Santer
Was "Re: nameref and referenced variable scope, setting other attributes"

On Mon, Mar 18, 2024 at 4:20 PM Chet Ramey  wrote:
>
> On 3/14/24 8:57 PM, Zachary Santer wrote:
> >
> > While we're kind of on the subject, I find it really odd that the
> > ${var@A} parameter expansion will expand to either an assignment
> > statement or a 'declare' command, depending on whether or not the
> > variable has an attribute set.

First of all, my use case for ${var@A} is actually passing associative
arrays between processes like they're JSON, so probably not the use
case you had in mind. I couldn't figure out how to get this to work
until I knew bash a lot better and tried ${assoc[*]@A}. In any case,
for a scalar variable, I'll just pass the value from one process to
the other, and the receiving process knows what it's supposed to be.

This will look like
printf '%s\x00' "${assoc[*]@A}" >&"${fd}"
in modern bash, or
{
  declare -p assoc
  printf '%s\x00'
} >&"${fd}"
in bash 4.2, which I'm actually pretty happy with.

In either case, the receiving end evals what it receives, as you would expect.

I guess, in bash 5.1+, it could pass
"${assoc[*]@K}"
and then the receiving end could
eval "assoc=( ${assoc_message} )"
if I wanted to avoid declaring the associative array anew.

> Yes. There is one thing missing: the transformation should expand to a
> `declare' command when applied to a local variable at the current scope,
> even if there are no attributes to be displayed. Agreed?

Again, my preferred solution would leave it up to the bash programmer
whether they want a declare command or an assignment statement. For my
use case, if, for whatever reason, bash decided to send associative
arrays as compound assignment statements without being in the context
of a declare command, the receiving end would have to have already
declared the associative array variable before eval'ing what it
received. Given that the documentation doesn't specify when bash would
choose to generate an assignment statement vice a declare command,
maybe that would be the safe way to go.

Honestly, I don't know if I ever fully considered the scope
implications of eval'ing declare commands like I am. Looking over
things again, I might have gotten lucky that this approach never
created a shadowing declaration for me.

> I am less convinced about outputting a `-g' for a global variable when
> called from a function scope, but I could be persuaded.
>
> Because of dynamic scoping, users will always have to be careful about
> using this expansion on variables that might be local variables at a
> previous function scope. I suppose it depends on the desired meaning of
> `recreate parameter'.
>
>
> > You'd think you'd want a parameter transformation that always expands
> > to a 'declare' command, and maybe another one that always expands to
> > an assignment statement.
>
> Most of the time there isn't a difference between `declare a=b' and `a=b'.

If people don't use functions most of the time.



Re: Examples of concurrent coproc usage?

2024-03-17 Thread Zachary Santer
On Thu, Mar 14, 2024 at 6:57 AM Carl Edquist  wrote:

> (And in general, latter coproc shells will have stray copies of the user
> shell's r/w ends from all previous coprocs.)

I didn't know that without MULTIPLE_COPROCS=1, bash wouldn't even
attempt to keep the fds from earlier coprocs out of later coprocs.

> Unexpectedly, the new multi-coproc code seems to close the user shell's
> end of a coprocess's pipes, once the coprocess has terminated.  When
> compiled with MULTIPLE_COPROCS=1, this is true even if there is only a
> single coproc:

> This is a bug.  The shell should not automatically close its read pipe to
> a coprocess that has terminated -- it should stay open to read the final
> output, and the user should be responsible for closing the read end
> explicitly.

> It also invites trouble if the shell variable that holds the fds gets
> removed unexpectedly when the coprocess terminates.  (Suddenly the
> variable expands to an empty string.)  It seems to me that the proper time
> to clear the coproc variable (if at all) is after the user has explicitly
> closed both of the fds.  *Or* else add an option to the coproc keyword to
> explicitly close the coproc - which will close both fds and clear the
> variable.

I agree. This was the discussion in [1], where it sounds like this was
the intended behavior. The array that bash originally created to store
the coproc fds is removed immediately, but the fds are evidently
closed at some later, indeterminate point. So, if you store the coproc
fds in a different array than the one bash gave you, you might still
be able to read from the read fd for a little while. That sounds
suspiciously like a race condition, though. The behavior without
MULTIPLE_COPROCS=1 might have changed since that discussion.
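
For the record, the workaround I mean is roughly this (the names are mine,
and cat is just a stand-in):

coproc my_coproc { cat; }
# Copy the fds out before the coproc can terminate and bash unsets the
# array it created.
my_coproc_fds=( "${my_coproc[0]}" "${my_coproc[1]}" )
my_coproc_pid="${my_coproc_PID}"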

> That's a nice trick with the shell backgrounding all the coprocesses
> before connecting the fifos.  But yeah, to make subsequent coprocesses you
> do still have to close the copy of the user shell's fds that the coprocess
> shell inherits.  It sounds like you are doing that (nice!), but in any
> case it requires some care, and as these stack up it is really handy to
> have something manage it all for you.

I absolutely learned more about what I was doing from that
conversation with Chet three years ago.

> (Perhaps this is where I ask if you are happy with your solution or if you
> would like to try out something wildly more flexible...)

Admittedly, I am very curious to see your bash coprocess management
library. I don't know how you could implement coprocesses outside of
bash's coproc keyword without using FIFOs somehow.

> Happy coprocessing! :)

Thanks for your detailed description of all this.

[1] https://lists.gnu.org/archive/html/help-bash/2021-04/msg00136.html



Re: nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-14 Thread Zachary Santer
On Thu, Mar 14, 2024 at 3:43 PM Chet Ramey  wrote:
>
> In fact, before 2020, local -p with no name arguments behaved the same as
> local without arguments, which just printed all the local variable names at
> the current scope in the form of assignment statements. That was certainly
> not usable to reproduce the current state.

While we're kind of on the subject, I find it really odd that the
${var@A} parameter expansion will expand to either an assignment
statement or a 'declare' command, depending on whether or not the
variable has an attribute set.

You'd think you'd want a parameter transformation that always expands
to a 'declare' command, and maybe another one that always expands to
an assignment statement.



Re: nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-14 Thread Zachary Santer
On Thu, Mar 14, 2024 at 11:47 AM Zachary Santer  wrote:
>
> Kind of funny that 'local -g' seems to work just fine, doing the same
> thing as 'declare -g' (at least in bash 4.2), but whatever.

bash 5.2.15 as well.



Re: nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-14 Thread Zachary Santer
On Thu, Mar 14, 2024 at 11:09 AM Chet Ramey  wrote:
>
> `local' always operates at the current function scope. `local -p' only
> displays local variables that exist at the current function scope.

Oh shoot. I hadn't considered that 'local -p' and 'declare -p' would
do different things.

Kind of funny that 'local -g' seems to work just fine, doing the same
thing as 'declare -g' (at least in bash 4.2), but whatever.

Sorry for the confusion.



Re: nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-14 Thread Zachary Santer
On Thu, Mar 14, 2024 at 10:47 AM Greg Wooledge  wrote:
>
> I can't seem to duplicate this.  This is with bash 5.2:

Run the whole script from my original email. Maybe I managed to trip
bash up with something I edited out of the last email. Pay particular
attention to var_3.

#!/usr/bin/env bash

func_1 () {
  local var_1='BREAD'
  local var_2='ICE CREAM'
  local var_3='EGG'
  func_2
  printf '%s\n' "func_1:"
  local -p var_1
  local -p var_2
  local -p var_3
}

func_2 () {
  local -n nameref_1=var_1
  local -l nameref_1
  nameref_1='TOAST'
  local -nl nameref_2='VAR_2'
  nameref_2='MILKSHAKE'
  local -n nameref_3='var_3'
  nameref_3='soufflé'
  local var_4='GROUND BEEF'
  local -n nameref_4='var_4'
  local -l nameref_4
  printf '%s\n' "func_2:"
  local -p nameref_1
  local -p var_1
  local -p nameref_2
  local -p var_2
  local -p nameref_3
  local -p var_3
  local -p nameref_4
  local -p var_4
}

func_1



Re: nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-14 Thread Zachary Santer
On Thu, Mar 14, 2024 at 9:54 AM Greg Wooledge  wrote:
>
> I don't quite understand what this is saying.  Do the variables have
> different names, or the same name?  If they have different names, then
> the nameref shouldn't "hide" the other variable.  But they can't have
> the same name, because a nameref pointing to itself is a circular
> reference and won't ever work under any circumstance.
>
> hobbit:~$ f() { local -n var=var; var=1; }; f
> bash: local: warning: var: circular name reference
> bash: warning: var: circular name reference
> bash: warning: var: circular name reference
>
> You don't even need an outer calling function to see this.
>

Editing things out of my original email to focus on this point:

On Sun, Mar 10, 2024 at 7:29 PM Zachary Santer  wrote:
>
> $ cat ./nameref-what
> #!/usr/bin/env bash
>
> func_1 () {
>   local var_3='EGG'
>   func_2
>   printf '%s\n' "func_1:"
>   local -p var_3
> }
>
> func_2 () {
>   local -n nameref_3='var_3'
>   nameref_3='soufflé'
>   local var_4='GROUND BEEF'
>   local -n nameref_4='var_4'
>   local -l nameref_4
>   printf '%s\n' "func_2:"
>   local -p nameref_3
>   local -p var_3
>   local -p nameref_4
>   local -p var_4
> }
>
> func_1
>
> $ ./nameref-what
> func_2:
> declare -n nameref_3="var_3"
> ./nameref-what: line 31: local: var_3: not found
> declare -n nameref_4="var_4"
> declare -l var_4="GROUND BEEF"
> func_1:
> declare -- var_3="soufflé"

Not on a machine with bash right now. 'local -p var_3' in func_2 ()
said var_3 was not found, despite it having just been set by assigning
a value to the nameref variable nameref_3.



Re: nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-14 Thread Zachary Santer
On Wed, Mar 13, 2024 at 3:44 PM Chet Ramey  wrote:
>
> `local' always creates variables at the current scope, or at the global
> scope if `-g' is supplied. If it's supplied the name of a nameref, it first
> resolves the nameref to find the name of the variable it's supposed to act
> on, failing if it can't. Once it has the name it needs, it creates or
> modifies the variable at the current scope. It doesn't try to create or
> modify the variable at the nameref's scope. This is one consequence of
> dynamic  scoping that affects the implementation: a nameref's value is just
> a name, not a pointer to a specific instance of a variable. Once you have
> that name, the normal scoping rules apply.
>
> If you want to look at it from a filesystem perspective, a nameref is a
> symlink, rather than a hard link.

Alright, that's all fair. But this?

On Sun, Mar 10, 2024 at 7:29 PM Zachary Santer  wrote:
>
> Additionally, a nameref variable referencing a variable declared in a calling 
> function hides that variable in the scope of the function where the nameref 
> variable is declared.



Examples of concurrent coproc usage?

2024-03-11 Thread Zachary Santer
Was "RFE: enable buffering on null-terminated data"

On Mon, Mar 11, 2024 at 7:54 AM Carl Edquist  wrote:
>
> On Sun, 10 Mar 2024, Zachary Santer wrote:
>
> > On Sun, Mar 10, 2024 at 4:36 PM Carl Edquist  wrote:
> >>
> >> Out of curiosity, do you have an example command line for your use case?
> >
> > My use for 'stdbuf --output=L' is to be able to run a command within a
> > bash coprocess.
>
> Oh, cool, now you're talking!  ;)
>
>
> > (Really, a background process communicating with the parent process
> > through FIFOs, since Bash prints a warning message if you try to run
> > more than one coprocess at a time. Shouldn't make a difference here.)
>
> (Kind of a side-note ... bash's limited coprocess handling was a long
> standing annoyance for me in the past, to the point that I wrote a bash
> coprocess management library to handle multiple active coprocess and give
> convenient methods for interaction.  Perhaps the trickiest bit about
> multiple coprocesses open at once (which I suspect is the reason support
> was never added to bash) is that you don't want the second and subsequent
> coprocesses to inherit the pipe fds of prior open coprocesses.  This can
> result in deadlock if, for instance, you close your write end to coproc1,
> but coproc1 continues to wait for input because coproc2 also has a copy of
> a write end of the pipe to coproc1's input.  So you need to be smart about
> subsequent coprocesses first closing all fds associated with other
> coprocesses.

https://lists.gnu.org/archive/html/help-bash/2021-03/msg00296.html
https://lists.gnu.org/archive/html/help-bash/2021-04/msg00136.html

You're on the money, though there is a preprocessor directive you can
build bash with that will allow it to handle multiple concurrent
coprocesses without complaining: MULTIPLE_COPROCS=1. Chet Ramey's
sticking point was that he hadn't seen coprocesses used enough in the
wild to satisfactorily test that his implementation did in fact keep
the coproc file descriptors out of subshells. If you've got examples
you can direct him to, I'd really appreciate it.

> Word to the wise: you might encounter this issue (coproc2 prevents coproc1
> from seeing its end-of-input) even though you are rigging this up yourself
> with FIFOs rather than bash's coproc builtin.)

In my case, it's mostly a non-issue, because I fork the - now three -
background processes before exec'ing automatic fds redirecting to/from
their FIFOs in the parent process. All the automatic fds get put in
an array, and I do close them all at the beginning of a subsequent
process substitution.
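
Stripped way down, the arrangement is something like this (dir,
child_command, and auto_fds here are stand-ins):

mkfifo -- "${dir}/to-child" "${dir}/from-child"
child_command < "${dir}/to-child" > "${dir}/from-child" &
exec {to_child}>"${dir}/to-child" {from_child}<"${dir}/from-child"
auto_fds+=( "${to_child}" "${from_child}" )

Closing them at the top of a process substitution is then a loop doing
'exec {fd}>&-' over auto_fds, since {varname}>&- closes the fd whose number
is stored in varname.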



nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-10 Thread Zachary Santer
Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: msys
Compiler: gcc
Compilation CFLAGS: -march=nocona -msahf -mtune=generic -O2 -pipe
-D_STATIC_BUILD
uname output: MINGW64_NT-10.0-19045 Zack2021HPPavilion 3.4.10.x86_64
2024-02-10 08:39 UTC x86_64 Msys
Machine Type: x86_64-pc-msys

Bash Version: 5.2
Patch Level: 26
Release Status: release

Description:

On Sun, Mar 10, 2024 at 3:55 PM Zachary Santer  wrote:
>
> Relatedly, how would one set attributes on a variable declared in a
> calling function? 'readonly' and 'export' can do it for their
> respective attributes, but otherwise, I think you just can't.

Second-guessed myself.

The manual says about 'declare -n':
-n  Give each name the nameref attribute, making it a name reference to
another variable. That other variable is defined by the value of name. All
references, assignments, and attribute modifications to name,  except those
using or changing the -n attribute itself, are performed on the variable
referenced by name's value. The nameref attribute cannot be applied to
array variables.

local, when called on a nameref variable referencing a variable declared in
a calling function, creates a new local variable named the same as the
value of the nameref variable. Given the above, I would expect it to
instead allow the setting of attributes and value of a variable declared in
a calling function.

Additionally, a nameref variable referencing a variable declared in a
calling function hides that variable in the scope of the function where the
nameref variable is declared.

Repeat-By:

$ cat ./nameref-what
#!/usr/bin/env bash

func_1 () {
  local var_1='BREAD'
  local var_2='ICE CREAM'
  local var_3='EGG'
  func_2
  printf '%s\n' "func_1:"
  local -p var_1
  local -p var_2
  local -p var_3
}

func_2 () {
  local -n nameref_1=var_1
  local -l nameref_1
  nameref_1='TOAST'
  local -nl nameref_2='VAR_2'
  nameref_2='MILKSHAKE'
  local -n nameref_3='var_3'
  nameref_3='soufflé'
  local var_4='GROUND BEEF'
  local -n nameref_4='var_4'
  local -l nameref_4
  printf '%s\n' "func_2:"
  local -p nameref_1
  local -p var_1
  local -p nameref_2
  local -p var_2
  local -p nameref_3
  local -p var_3
  local -p nameref_4
  local -p var_4
}

func_1

$ ./nameref-what
func_2:
declare -n nameref_1="var_1"
declare -l var_1="toast"
declare -nl nameref_2="var_2"
./nameref-what: line 29: local: var_2: not found
declare -n nameref_3="var_3"
./nameref-what: line 31: local: var_3: not found
declare -n nameref_4="var_4"
declare -l var_4="GROUND BEEF"
func_1:
declare -- var_1="BREAD"
declare -- var_2="MILKSHAKE"
declare -- var_3="soufflé"

Because var_1 wasn't declared in the scope of func_2, I was expecting the
var_1 in func_1 to gain the -l attribute and the value 'toast'. local
instead declaring a new local variable var_1 in the scope of func_2 doesn't
make as much sense.

Setting the value of nameref_2 to "var_2" because of nameref_2's -l
attribute makes sense, but I'm surprised that var_2 is now hidden in the
scope of func_2. And then nameref_3 and var_3 show that that would've
happened regardless of -l.

Indeed, there is no way to set attributes other than readonly and export on
a variable declared in a calling function.


Re: "local -g" declaration references local var in enclosing scope

2024-03-10 Thread Zachary Santer
On Sun, Mar 10, 2024 at 1:51 PM Kerin Millar  wrote:
> $ y() { local -g a; a=123; echo "inner: $a"; }
> $ x; echo "outermost: $a"
> inner: 123
> outer: 123
> outermost:
>
> This may not be. There, the effect of the -g option effectively ends at the 
> outermost scope in which the variable, a, was declared. Namely, that of the x 
> function.

Pretty sure what's going on there is that "a=123" here is just setting
the value of the variable a from the innermost scope where it's
declared, and has nothing to do with declaring a different, global
variable a immediately prior.

Relatedly, how would one set attributes on a variable declared in a
calling function? 'readonly' and 'export' can do it for their
respective attributes, but otherwise, I think you just can't.
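
To illustrate the asymmetry, a minimal sketch:

outer () {
  local var='value'
  inner
  declare -p var
}
inner () {
  export var    # sets the -x attribute on outer's var just fine
  # but there's no analogous command for, say, -l or -u
}
outer           # prints: declare -x var="value"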



Re: Bash 5.1: Make shell_script_filename available to startup files

2024-02-20 Thread Zachary Santer
On Tue, Feb 20, 2024 at 2:38 PM alex xmb sw ratchev 
wrote:

> check if $BASH_SOURCE begins with a / , else prepend $PWD to it
> to path of file
>

source "$( dirname -- "$( realpath --canonicalize-existing --
"${BASH_SOURCE[0]}" )" )\
/common/sourced-file.bash"

What I do, anyway. Won't let symlinks cause issues.


Re: Bash 5.1: Make shell_script_filename available to startup files

2024-02-20 Thread Zachary Santer
On Tue, Feb 20, 2024 at 11:17 AM Chet Ramey  wrote:

>
> BASH_SOURCE and BASH_LINENO are part of the debugger support, and exist so
> the debugger can create a function call stack.

I've definitely used them to create my own call stack. I've seen
${BASH_SOURCE[0]} given as preferable to ${0} for figuring out where the
running script is, though I don't recall the details there.


> Since startup files are read before any commands, interactive or non-
> interactive, the name of any shell script isn't in BASH_SOURCE when reading
> the startup files.

Fine.


Re: Bash 5.1: Make shell_script_filename available to startup files

2024-02-17 Thread Zachary Santer
On Sat, Feb 17, 2024 at 3:44 PM Marc Aurèle La France 
wrote:

> Do ...
>
> rm -fr GREPME grepme
> echo '#! /bin/bash\nset' > GREPME
> "ln" GREPME grepme
> chmod a+x grepme
>
> ... then (case matters) ...
>
> BASH_ENV=GREPME ./grepme | fgrep -iw grepme
>
> ... gives ...
>
> BASH_ARGV=([0]="GREPME")
> BASH_ENV=GREPME
> BASH_SOURCE=([0]="GREPME")
> _=./grepme
> BASH_ENV=GREPME
> BASH_SOURCE=([0]="./grepme")
>
> ... so $_ wins, not $BASH_SOURCE[0]
>

Sorry I didn't read your original email chain from 2021 before responding
earlier.

You want a script sourced because it's given as ${BASH_ENV} to be able to
tell what script it was sourced from?

If you're satisfied with $_ or now $0, then fine, but I would actually
expect that to show up as ${BASH_SOURCE[1]} within the ${BASH_ENV} script,
which it obviously doesn't. Don't know what ${BASH_LINENO[0]} ought to be
in that case. Maybe 0?

The way the manual explains BASH_SOURCE and BASH_LINENO is all in
reference to FUNCNAME, which doesn't exist (or is empty) if you're not
executing a function. Still, with explicit sourcing, this is the behavior
we see:

zsant@Zack2021HPPavilion MINGW64 ~/random
$ cat original-file
#!/bin/bash
source ./sourced-file

zsant@Zack2021HPPavilion MINGW64 ~/random
$ cat sourced-file
declare -p BASH_SOURCE BASH_LINENO

zsant@Zack2021HPPavilion MINGW64 ~/random
$ ./original-file
declare -a BASH_SOURCE=([0]="./sourced-file" [1]="./original-file")
declare -a BASH_LINENO=([0]="2" [1]="0")

So this would seemingly be more consistent and not require a whole new
shell variable.


Re: Bash 5.1: Make shell_script_filename available to startup files

2024-02-17 Thread Zachary Santer
On Fri, Feb 16, 2024 at 9:45 PM Marc Aurèle La France 
wrote:

> On Fri, 2024-Feb-16, Zachary Santer wrote:
>
> > And you're sure ${BASH_SOURCE[0]} wasn't what you wanted all along?
>
> Yes, but that doesn't matter now.
>

At the bottom of my .bashrc file:
printf '%s\n' "Hello, this script is ${BASH_SOURCE[0]}"

I start a new terminal and get:
Hello, this script is /home/zsant/.bashrc

How is this not what you wanted?


Re: Bash 5.1: Make shell_script_filename available to startup files

2024-02-16 Thread Zachary Santer
On Fri, Feb 16, 2024 at 4:17 PM Marc Aurèle La France 
wrote:

> On Mon, 2021-Feb-01, Marc Aurèle La France wrote:
>
> > Currently, only the script's arguments are passed as positional
> > parameters.  For compatibility reasons, $0 cannot be used to also pass
> the
> > script's filename, so I'm creating a new BASH_SCRIPT variable instead.
>

And you're sure ${BASH_SOURCE[0]} wasn't what you wanted all along?


Re: declare -A +A

2024-02-08 Thread Zachary Santer
On Wed, Feb 7, 2024 at 2:24 AM Grisha Levit  wrote:

> Maybe it would be appropriate to reject a request to turn off an
> attribute that is being turned on?
>

Since this seems specific to indexed and associative arrays, it might make
more sense to just give the same error you get if you try to unset the
attribute later.

$ unset assoc array
$ assoc='spoon'
$ declare -A assoc
$ declare -p assoc
declare -A assoc=([0]="spoon" )
$ declare +A assoc
-bash: declare: assoc: cannot destroy array variables in this way
$ array='fork'
$ declare -a array
$ declare -p array
declare -a array=([0]="fork")
$ declare +a array
-bash: declare: array: cannot destroy array variables in this way


Re: Feature request: prompt strings in output from edit-and-execute-command readline function ( was About `M-C-e` expand result `'` failed )

2024-02-06 Thread Zachary Santer
On Tue, Feb 6, 2024 at 3:07 PM Chet Ramey  wrote:

> This is more like sourcing a file with `set -v' temporarily enabled (in
> fact, it's very close to that).
>

Can't imagine POSIX would be amenable to all the 'set -v' output lines
being prepended with PS2s, but that actually suffers from the same
ambiguity if stdout and stderr are going to the same place. Not that people
use 'set -v' enough to complain.


Re: About `M-C-e` expand result `'` failed

2024-02-06 Thread Zachary Santer
On Tue, Feb 6, 2024 at 3:13 PM Chet Ramey  wrote:

> There are other places (e.g., ${var@Q} where bash chooses the most
> appropriate form of quoting. Why not here?


zsant@Zack2021HPPavilion MINGW64 ~
$ animal='dog'

zsant@Zack2021HPPavilion MINGW64 ~
$ action='chases the cat'

zsant@Zack2021HPPavilion MINGW64 ~
$ printf '%s\n' "${animal}" "${action}"
dog
chases the cat

zsant@Zack2021HPPavilion MINGW64 ~
$ printf %s\n dog chases the cat # Same command as above, but with M-C-e
dognchasesnthencatn
zsant@Zack2021HPPavilion MINGW64 ~
$ printf '%s\n' "${animal@Q}" "${action@Q}"
'dog'
'chases the cat'

When Bash is looking at a line like
$ printf %s\n dog chases the cat
where should it choose to add quotes?

Compare this scenario to a hypothetical readline command that transforms
the command line so as to show the results of expansions, but leaving
quoting in place and adding backslashes, such that the end result of
evaluating the transformed line is identical to what would've happened had
the line not been transformed.

I accept the argument that you can just undo the M-C-e operation, though.


Re: About `M-C-e` expand result `'` failed

2024-02-06 Thread Zachary Santer
On Tue, Feb 6, 2024 at 2:35 PM Chet Ramey  wrote:

> and add a second bindable function -- which would add the same keypress --
> to quote the words.
>

Would Bash be saving what the line looked like before it performed quote
removal with M-C-e, and replacing what's on the command-line with that when
this other readline function is invoked? All this talk of a quote-line
readline function doesn't make any sense if Bash doesn't know how you want
it to be quoted.

This seems so obvious that I must be missing something.


Feature request: prompt strings in output from edit-and-execute-command readline function ( was About `M-C-e` expand result `'` failed )

2024-02-03 Thread Zachary Santer
On Fri, Feb 2, 2024 at 4:21 PM Chet Ramey  wrote:

> OK, I'll take that as a feature request for a future version.
>

While I'm making feature requests.

I hit C-x C-e and enter the following into my editor:

var='duck'
declare -p var
(
  var='squirrel'
  declare -p var
)
declare -p var

I save that and exit my editor, and this is what I get in the terminal:

zsant@Zack2021HPPavilion MINGW64 ~
$
var='duck'
declare -p var
declare -- var="duck"
(
  var='squirrel'
  declare -p var
)
declare -- var="squirrel"
declare -p var
declare -- var="duck"

Kinda confusing, right?

I could see adding PS1 and PS2 prompts where they would've been, had I
typed all my commands into the terminal directly. PS2 prompts in front of
everything that was entered in the editor might be preferable, though,
given the complex PS1s you see sometimes.

PS1s and PS2s:

zsant@Zack2021HPPavilion MINGW64 ~
$ var='duck'

zsant@Zack2021HPPavilion MINGW64 ~
$ declare -p var
declare -- var="duck"

zsant@Zack2021HPPavilion MINGW64 ~
$ (
>   var='squirrel'
>   declare -p var
> )
declare -- var="squirrel"

zsant@Zack2021HPPavilion MINGW64 ~
$ declare -p var
declare -- var="duck"

All PS2s:

> var='duck'
> declare -p var
declare -- var="duck"
> (
>   var='squirrel'
>   declare -p var
> )
declare -- var="squirrel"
> declare -p var
declare -- var="duck"

I do think I prefer all PS2s, myself. Makes a distinction between things
entered in a text editor with edit-and-execute-command and things entered
directly on the command line, and also makes a distinction between commands
and their output.

Would be nice.

Zack


Re: About `M-C-e` expand result `'` failed

2024-02-02 Thread Zachary Santer
First of all, I'd like to direct your attention back to the original email
in this thread, by A4-Tacks:
https://lists.gnu.org/archive/html/bug-bash/2024-01/msg00132.html
That looks a lot more like a bug, and I'm sorry I distracted from it with
my statement questioning what shell-expand-line was doing in the more
general case.

On Fri, Feb 2, 2024 at 1:26 PM Chet Ramey  wrote:

> Are you then suggesting a bindable readline function to quote the line
> or region? It's not reasonable to change a function that has worked one
> way for 30+ years.
>

Ultimately, what I'm saying is that a different bindable function that
performs all the shell expansions other than quote removal would be more
useful than shell-expand-line.

As much as Mike Jonkmans didn't address this in his email just now, that's
what I see in this exchange, as well:

On Fri, Feb 2, 2024 at 10:10 AM Chet Ramey  wrote:

> On 2/2/24 8:12 AM, Mike Jonkmans wrote:
> > I might expect that, because it would be useful to see what is going to
> be
> > executed.
>
> I don't get this part. A desire is not an expectation.


Not performing quote removal - or potentially adding backslashes - would be
more desirable. What we have in shell-expand-line seems like what the user
wanted, but not quite.

shell-expand-line didn't make a ton of sense 30+ years ago, either, but I
won't argue that its behavior should be changed, considering that
performing quote removal was apparently what was intended.

I don't even use bindable functions, really. A4-Tacks's original email just
got me curious as to what you would even do with the result if the bug he
encountered wasn't present. I finally look in that part of the manual, and
now I'm trying to remember C-x C-e, because edit-and-execute-command would
be really useful to me. I wasn't aware of an undo function, either, for
that matter. That function makes a lot of what I'm saying here moot, anyway.


Re: About `M-C-e` expand result `'` failed

2024-02-01 Thread Zachary Santer
On Thu, Feb 1, 2024 at 4:00 AM Andreas Schwab  wrote:

> The shell-expand-line function's purpose is to let you edit the expansion
> further.
>

That would still leave you needing to requote the line yourself, if there
was any quoting present originally.


Re: About `M-C-e` expand result `'` failed

2024-01-30 Thread Zachary Santer
On Tue, Jan 30, 2024 at 10:04 AM Andreas Schwab  wrote:

> The command is doing exactly what it is documented to do, that is do all
> of the shell word expansions.
>

Quote Removal shows up as a subsection of the Shell Expansion section in
the manual, but it doesn't show up in the list of expansions at the
beginning of that section.[1] Additionally, we see the following:
> After these expansions are performed, quote characters present in the
original word are removed unless they have been quoted themselves (quote
removal).
and
> After all expansions, quote removal (see Quote Removal) is performed.
Clearly setting quote removal apart from the word expansions listed in that
section.

Whether it makes sense, building from that, that some backslashes would
need to be added, like I said in my earlier email, I don't know. But quote
removal should not occur.

[1] https://www.gnu.org/software/bash/manual/html_node/Shell-Expansions.html


Re: About `M-C-e` expand result `'` failed

2024-01-30 Thread Zachary Santer
On Tue, Jan 30, 2024 at 9:11 AM Zachary Santer  wrote:

> $ var='foo'
> $ echo "$( echo '${var}' )" # enter
> ${var}
> $ echo "$( echo '${var}' )" # M-C-e
> $ echo ${var} # enter
> foo
> # Would've expected: echo "\${var}"
> $ echo $( echo '${var}' ) # enter
> ${var}
> $ echo $( echo '${var}' ) # M-C-e
> $ echo ${var} # enter
> foo
> # Would've expected: echo \${var}
> $ echo '${var}' # enter
> ${var}
> $ echo '${var}' # M-C-e
> $ echo ${var} # enter
> foo
> # Would've expected: echo '${var}'
> $ echo "${var}" # enter
> foo
> $ echo "${var}" # M-C-e
> $ echo foo
> foo
> # Would've expected: echo "foo"
>

For that matter:

$ echo \${var} # enter
${var}
$ echo \${var} # M-C-e
$ echo ${var} # enter
foo
# Would've expected: echo \${var}

There's no way this is the intended behavior, right?


Re: About `M-C-e` expand result `'` failed

2024-01-30 Thread Zachary Santer
On Tue, Jan 30, 2024 at 6:48 AM Zachary Santer  wrote:

> Would it be unreasonable to expect that the argument to the outermost echo
> after M-C-e still be in double-quotes? Or potentially placed in
> single-quotes. That way, the line would be evaluated the same way as it
> would've been without M-C-e.
>
Double-quoting is probably the way to go in that case, but then the M-C-e
logic has to also be smart enough to backslash escape things so the line
gets evaluated the same way as it otherwise would've been. And
single-quotes should be left in place as well.

$ var='foo'
$ echo "$( echo '${var}' )" # enter
${var}
$ echo "$( echo '${var}' )" # M-C-e
$ echo ${var} # enter
foo
# Would've expected: echo "\${var}"
$ echo $( echo '${var}' ) # enter
${var}
$ echo $( echo '${var}' ) # M-C-e
$ echo ${var} # enter
foo
# Would've expected: echo \${var}
$ echo '${var}' # enter
${var}
$ echo '${var}' # M-C-e
$ echo ${var} # enter
foo
# Would've expected: echo '${var}'
$ echo "${var}" # enter
foo
$ echo "${var}" # M-C-e
$ echo foo
foo
# Would've expected: echo "foo"

From the manual:
> shell-expand-line (M-C-e)
>  Expand the line as the shell does. This performs alias and history
expansion as well as all of the shell word expansions. See HISTORY
EXPANSION below for a description of history expansion.
Doesn't list quote removal.


Re: About `M-C-e` expand result `'` failed

2024-01-30 Thread Zachary Santer
Even if this doesn't happen, you still don't end up with the command line
you probably wanted.

$ echo "$(echo $'ab\ncd\nef')" # enter
ab
cd
ef
$ echo "$(echo $'ab\ncd\nef')" # input M-C-e
$ echo ab
cd
ef # enter
ab
-bash: ef: command not found

And yes, that did change my working directory to my home directory.

Would it be unreasonable to expect that the argument to the outermost echo
after M-C-e still be in double-quotes? Or potentially placed in
single-quotes. That way, the line would be evaluated the same way as it
would've been without M-C-e.

Compare:

$ echo $(echo $'ab\ncd\nef') # enter
ab cd ef
$ echo $(echo $'ab\ncd\nef')  # input M-C-e
$ echo ab cd ef # enter
ab cd ef

which was kind of surprising but arguably correct.


Re: funsub questions

2023-12-17 Thread Zachary Santer
On Wed, Dec 13, 2023 at 9:29 PM Kerin Millar  wrote:
> On Wed, 13 Dec 2023 20:50:48 -0500
> Zachary Santer  wrote:
> > Would using funsubs to capture the stdout of external commands be
> > appreciably faster than using comsubs for the same?
>
> In the case of a script that would otherwise fork many times, frequently,
the difference is appreciable and can be easily measured.

As a follow-on question, why would this be implemented only now? From the
very beginning, capturing the stdout of an external command involved
forking a subshell, and soon (assuming funsubs remain when 5.3 is released)
it won't have to. It feels like something changed to make this feasible
when it hadn't been before.
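
Not that I've benchmarked it carefully, but the measurement itself is easy
enough to sketch, on a bash built from the devel branch:

time for (( i = 0; i < 1000; i++ )); do out=$( /bin/true ); done
time for (( i = 0; i < 1000; i++ )); do out=${ /bin/true; }; done

The funsub loop still forks for the external command itself; it just skips
the extra subshell fork on every iteration.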

Zack


Re: funsub questions

2023-12-13 Thread Zachary Santer
On Wed, Dec 13, 2023 at 11:06 PM Greg Wooledge  wrote:
> Is that on a system that lacks a process manager?  Something like
> "systemctl reload ssh" or "service ssh reload" would be preferred from
> a system admin POV, on systems that have process managers.
I am not super knowledgeable in this kind of stuff, but would that not
cause you to lose your SSH connection?

> And before bash 5.2,
>
> read -r pid < /var/run/sshd.pid && sudo kill -s SIGHUP "$pid"
>
> would be more efficient, on systems where no process manager is in use.
Yeah, that's fair. This is in the interactive shell, though, so how
long it takes to run isn't a huge concern.



Re: funsub questions

2023-12-13 Thread Zachary Santer
On Wed, Dec 13, 2023 at 9:17 PM Greg Wooledge  wrote:
> If you'd like to read the contents of a file into a variable, the
> "read" and "readarray" (aka "mapfile") builtins are usually better
> choices anyway.  $(< file) would only be useful if you want the entire
> content in a single string variable, which is a questionable idea in
> most programs.
sudo kill -s SIGHUP "$(< /var/run/sshd.pid)"
The one thing I've done with this that wasn't a bad idea.



funsub questions

2023-12-13 Thread Zachary Santer
Would there be a purpose in implementing ${< *file*; } to be the equivalent
of $(< *file* )? Does $(< *file* ) itself actually fork a subshell?

Would using funsubs to capture the stdout of external commands be
appreciably faster than using comsubs for the same?

- Zack


Re: $((expr)) allows the hexadecimal constant "0x"

2023-12-11 Thread Zachary Santer
On Mon, Dec 11, 2023 at 9:26 AM Chet Ramey  wrote:
> Part of the issue here is that distros -- Red Hat, at least -- never
> upgrade the version of bash they started with. Red Hat will `support'
> bash-4.2 as long as they support RHEL 7.
To be fair to the Red Hat people, "my scripts used to work and now
they don't" is the issue they're trying to avoid, at least within a
release.



Re: $((expr)) allows the hexadecimal constant "0x"

2023-12-11 Thread Zachary Santer
On Sun, Dec 10, 2023 at 3:56 PM Chet Ramey  wrote:
> Come on. Bash (and POSIX) define arithmetic in terms of how C does it,
> and that is an invalid C integer constant. It's not even shell-specific
> syntax like base#number; it's something that C defines. Is it worth it
> trying to be helpful, or is it better to follow the standard you say you
> do?

Just so everyone's clear:

Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: msys
Compiler: gcc
Compilation CFLAGS: -march=nocona -msahf -mtune=generic -O2 -pipe
-D_STATIC_BUILD
uname output: MINGW64_NT-10.0-19045 Zack2021HPPavilion 3.4.10.x86_64
2023-11-30 06:09 UTC x86_64 Msys
Machine Type: x86_64-pc-msys

Bash Version: 5.2
Patch Level: 21
Release Status: release

$ printf '%s\n' "$(( 0x ))";
0

In my opinion, it would be better to have this not treated as valid,
for the same reason that 10# is no longer treated as valid. And it
would probably do less script-breaking than invalidating 10# did.

Sorry if that's already been done under the devel branch. I didn't
really get that vibe.

- Zack



Re: $((expr)) allows the hexadecimal constant "0x"

2023-12-09 Thread Zachary Santer
On Thu, Nov 30, 2023 at 5:19 AM Martin D Kealey  wrote:
> > > This change will break scripts that use $((10#$somevar)) to cope with
> > > somevar having leading zeroes OR BEING EMPTY.
Beside the point, but I wanted to point out how easy this is to work around.

$ number=0196
$ unset somevar
$ printf '%s\n' "$(( 10#${number} ))"
196
$ printf '%s\n' "$(( 10#${somevar} ))"
-bash: 10#: invalid integer constant (error token is "10#")
$ printf '%s\n' "$(( 10#0${number} ))"
196
$ printf '%s\n' "$(( 10#0${somevar} ))"
0
$ printf '%s\n' "$(( 10#${number:-0} ))"
196
$ printf '%s\n' "$(( 10#${somevar:-0} ))"
0

Re: maintaining Bash scripts across versions of Bash, here's what I
do. I implement everything in the earliest version of Bash I care to
support, Bash 4.2 on RHEL 7. That sits in my bash-4.2 branch in Git.
Then I've got a bash-4.4 branch and a rhel-9 branch. As I find little
improvements I can only make in later versions of Bash, I'll implement
those changes in the first branch where they are supported. Besides
that, everything I do in the bash-4.2 branch is going to get merged
into the bash-4.4 branch, and from there into the rhel-9 branch.
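
In git terms, promoting a change is then just a couple of merges, something
like:

git checkout bash-4.4 && git merge bash-4.2
git checkout rhel-9 && git merge bash-4.4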

Then I don't have a reason to wait a decade to take advantage of new
Bash features, while Bash installations that support them slowly
become the norm. The changes in my scripts, at least, going from
bash-4.2 to rhel-9 are pretty minimal. So implementing little things
as I come across them and dealing with some merge conflicts isn't a
big deal.

That said, this is not a large codebase, and I'm not aware of anything
in the bash-4.2 branch that wouldn't work under RHEL 9.

Zack



Re: bash tries to parse comsub in quoted PE pattern

2023-10-18 Thread Zachary Santer
On Wed, Oct 18, 2023 at 8:47 AM alex xmb sw ratchev 
wrote:

> by chet stating many times that every bash item undergoes expansion ..
>
As in, Bash expands
$ printf '%s\n' "${array["{2..6..2}"]}"
to
$ printf '%s\n' "${array[2]}" "${array[4]}" "${array[6]}"
on its way to giving you
two
four
six
That took me a while to comprehend. Alright, I take back that it isn't
documented anywhere.

u miss at least once the quotes of ['{blabla}']
>
Yeah, I was trying different, seemingly-related things.

its both the 1+ args
> $@ expands to different args
> "$@" to preserve spacesafe
>
> $* expands the same ( all args ) to one-arg
> "$*" spaceaafe
>
I know that. I don't really get what you're getting at, here.

This doesn't work either, obviously:
$ printf '%s\n' "${array["${indices[*]}"]}"
-bash: 2 4 6: syntax error in expression (error token is "4 6")
$ printf '%s\n' "${array["${indices[@]}"]}"
-bash: 2 4 6: syntax error in expression (error token is "4 6")

I guess I still want to hear about "${#@}" and $[  ]. Sorry about bringing
them up in relation to something that *is* documented.

Zack


Re: bash tries to parse comsub in quoted PE pattern

2023-10-18 Thread Zachary Santer
On Tue, Oct 17, 2023 at 5:56 PM Emanuele Torre 
wrote:

> bash-5.1$ letters=( {a..z} ); echo "${letters["{10..15}"]}"
> k l m n o p
>

Then there's the question "Was that even supposed to work like that?" If
so, you'd think it would generalize to being able to pass a series of
whitespace-delimited indices to an array expansion.

In Bash 5.2:
$ array=( zero one two three four five six )
$ printf '%s\n' "${array["{2..6..2}"]}"
two
four
six
$ printf '%s\n' "${array[{2..6..2}]}"
-bash: {2..6..2}: syntax error: operand expected (error token is
"{2..6..2}")
$ printf '%s\n' "${array["2 4 6"]}"
-bash: 2 4 6: syntax error in expression (error token is "4 6")
$ printf '%s\n' "${array[2 4 6]}"
-bash: 2 4 6: syntax error in expression (error token is "4 6")
$ printf '%s\n' "${array[2,4,6]}"
six
$ indices=(2 4 6)
$ printf '%s\n' "${array[${indices[@]}]}"
-bash: 2 4 6: syntax error in expression (error token is "4 6")
$ printf '%s\n' "${array[${indices[*]}]}"
-bash: 2 4 6: syntax error in expression (error token is "4 6")

Considering I don't think this is documented anywhere, and what's in
between the square brackets gets an arithmetic expansion applied to it, I'm
going to guess "no."
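
The arithmetic part, at least, is documented and easy to demonstrate:

$ i=2
$ printf '%s\n' "${array[i + 1]}"
three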

So how important is it to maintain undocumented behavior?

Why does "${#@}" expand to the same thing as "${#}"? Why is $[  ]
equivalent to $((  ))? Does that stuff need to continue to work forever?
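
For anyone who hasn't run into those, a quick demonstration:

$ set -- a b c
$ printf '%s\n' "${#@}" "${#}" "$[ 2 ** 3 ]" "$(( 2 ** 3 ))"
3
3
8
8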

Zack


Re: wrong variable name in error message about unbound variable?

2023-10-17 Thread Zachary Santer
On Tue, Oct 17, 2023 at 8:43 AM Zachary Santer  wrote:

> On Tue, Oct 17, 2023 at 8:00 AM Greg Wooledge  wrote:
>
>> unicorn:~$ unset -v a b c array
>> unicorn:~$ a=b b=c c=42 array[a]=foo; declare -p array
>> declare -a array=([42]="foo")
>>
> What? What is Bash doing here? Dereferencing iteratively until it finds
> something it can do arithmetic with?
>

$ unset -v a b c d array
$ a=b b=c c=d array[a]=foo; declare -p array
-bash: d: unbound variable

I guess so. That seems like a ... strange thing to implement.


Re: wrong variable name in error message about unbound variable?

2023-10-17 Thread Zachary Santer
On Tue, Oct 17, 2023 at 8:00 AM Greg Wooledge  wrote:

> unicorn:~$ unset -v a b c array
> unicorn:~$ a=b b=c c=42 array[a]=foo; declare -p array
> declare -a array=([42]="foo")
>
What? What is Bash doing here? Dereferencing iteratively until it finds
something it can do arithmetic with?


Re: [PATCH] printf: add %#s alias to %b

2023-09-07 Thread Zachary Santer
On Thu, Sep 7, 2023 at 12:55 PM Robert Elz  wrote:

> There are none, printf(3) belongs to the C committee, and they can make
> use of anything they like, at any time they like.
>
> The best we can do is use formats that make no sense for printf(1) to
> support
>

That's still assuming the goal of minimizing the discrepancies between
printf(1) and printf(3) format specifiers. As you point out, that isn't
particularly useful, and these things diverging further is now a foregone
conclusion. The only benefit, from my perspective, is allowing the
printf(1) man page to simply reference the printf(3) man page for
everything that printf(1) attempts to replicate.

Zack


Re: [PATCH] printf: add %#s alias to %b

2023-09-07 Thread Zachary Santer
The trouble with using an option flag to printf(1) to toggle the meaning of
%b is that you can't then mix format specifiers for binary literals and
backslash escape expansion within the same format string. You'd just have
to call printf(1) multiple times, which largely defeats the purpose of a
format string.

I don't know what potential uppercase/lowercase pairs of format specifiers
are free from use in any existing POSIX-like shell, but my suggestion would
be settling on one of those to take on the meaning of C2x's %b. They'd
still print '0b' or '0B' in the resulting binary literal when given the #
flag, which might be a little confusing, but this seems like the safest way
to go. It obviously still represents a divergence from C2x's printf(3), but
I think the consensus is that that's going to happen regardless.

ksh's format specifiers for arbitrary-base integer representation sound
really slick, and I'd love to see that in Bash, too, actually.

Zack


Re: Warn upon "declare -ax"

2023-09-05 Thread Zachary Santer
Woops.

On Mon, Sep 4, 2023 at 10:16 PM Zachary Santer  wrote:

> On Mon, Sep 4, 2023 at 8:46 AM Léa Gris  wrote:
>
>> There don's seem to be any warning system in Bash or other shells. As
>> long as it is not a fatal error condition and errexit is not set,
>> execution continue.
>>
>
> $ coproc cat_coproc_1 { cat; }
> [14] 7892
>
> $ coproc cat_coproc_2 { cat; }
> -bash: warning: execute_coproc: coproc [7892:cat_coproc_1] still exists
> [15] 7894
>
> $ grep -B2 -n -F -e 'warning' -- po/bash.pot
> 170-#: builtins/common.c:134 error.c:264
> 171-#, c-format
> 172:msgid "warning: "
> --
> 333-
> 334-#: builtins/complete.def:696
> 335:msgid "warning: -F option may not work as you expect"
> --
> 337-
> 338-#: builtins/complete.def:698
> 339:msgid "warning: -C option may not work as you expect"
> --
> 649-#: builtins/printf.def:734
> 650-#, c-format
> 651:msgid "warning: %s: %s"
> --
> 915-#: error.c:310
> 916-#, c-format
> 917:msgid "DEBUG warning: "
>
> The coproc thing was the only warning message I could think of right off
> hand. Listed right after the blurb about arrays not being exportable in the
> BUGS section of the man page, actually.
>
> Dumb aside, but it maybe doesn't make sense to have ampersands listed
> after them in the output from jobs:
> $ jobs
> ...
> [14]   Running coproc cat_coproc_1 { cat; } &
> [15]   Running coproc cat_coproc_2 { cat; } &
> Since ampersands don't appear in their declarations.
>
> $ declare -ax export_array=( one two three four )
>
> $ declare -p export_array
> declare -ax export_array=([0]="one" [1]="two" [2]="three" [3]="four")
>
> It leaves the export attribute set on the array variable, as well. But
> indeed, my little test script did not have access to this variable.
>
> Zack
>


Re: using exec to close a fd in a var crashes bash

2023-08-23 Thread Zachary Santer
On Wed, Aug 23, 2023 at 1:48 AM Martin D Kealey 
wrote:

> Chopping and changing behaviour in "permanent" releases creates a
> maintenance nightmare.
>

Well now with Bash 5.2, we've got the varredir_close shell option,
something savvy shell programmers would probably just always use, like
lastpipe.

From the NEWS file [1]:

o. The new `varredir_close' shell option causes bash to automatically close
   file descriptors opened with {var}<fn (et al.) instead of leaving them
   open when the command completes.

$ printf 'words\n' {fd}>&1 1>&2 2>&$fd {fd}>&-
words
$ ls /dev/fd
0  1  10  2  3
$ exec {fd}>&-
$ ls /dev/fd
0  1  2  3
$ shopt -s varredir_close
$ printf 'words\n' {fd}>&1 1>&2 2>&$fd {fd}>&-
words
$ ls /dev/fd
0  1  2  3
$ exec {fd}> this_file.txt
$ printf 'words\n' >&$fd
$ printf 'other words\n' >&$fd
$ exec {fd}>&-
$ cat this_file.txt
words
other words

Zack

[1]: https://git.savannah.gnu.org/cgit/bash.git/tree/NEWS


Re: String replacement drops leading '-e' if replacing char is a space

2023-08-14 Thread Zachary Santer
On Mon, Aug 14, 2023 at 1:27 AM Eduardo Bustamante 
wrote:

> The echo command is consuming the '-e', as it is a flag.  Instead, try
> using:
>
>   printf '%s\n' "${x/,/ }"
>

Or just redefine your echo, lol.

echo() {
  local IFS=' '
  printf '%s\n' "${*}"
}

Nah, just don't ever use echo for printing variables. That's just a rule of
thumb that everyone needs to know.


Re: ! history expansion occurs within arithmetic substitutions

2023-08-08 Thread Zachary Santer
On Tue, Aug 8, 2023 at 3:11 PM Dale R. Worley  wrote:

> But I would not take it as given that nobody would ever want to use
> history expanstion within an arithmetic substitution.  Let me concoct an
> example:
>

Yeah, that's all fair. Admittedly, I never use history expansion, so I'm
not thinking about how it could be useful in a given context. May as well
put 'set +o histexpand' in my .bashrc and call it a day.

When messing around in interactive mode to test things I may want to do in
a script, it's nice if it actually behaves the same. There are probably
some other set and possibly shopt things I should have somewhere if that's
what I want.

- Zack


! history expansion occurs within arithmetic substitutions

2023-08-08 Thread Zachary Santer
Configuration Information:
Machine: x86_64
OS: msys
Compiler: gcc
Compilation CFLAGS: -march=nocona -msahf -mtune=generic -O2 -pipe
-D_STATIC_BUILD
uname output: MSYS_NT-10.0-19045 Zack2021HPPavilion 3.4.7.x86_64 2023-07-14
16:57 UTC x86_64 Msys
Machine Type: x86_64-pc-msys

Bash Version: 5.2
Patch Level: 15
Release Status: release

Description:
Similarly, history expansion occurs within arithmetic substitutions. This
will never, ever be what the user wants. And now I know how to make it not
happen.

Repeat-By:
$ set -o histexpand
$ number=4
$ printf '%s\n' "$(( !0 ))"
-bash: !0: event not found
$ (( !0 )); printf '%s\n' "${?}"
-bash: !0: event not found
$ printf '%s\n' "$(( !1 ))"
printf '%s\n' "$(( pacman -Suy ))"
0
$ printf '%s\n' "$(( !3 ))"
printf '%s\n' "$(( man pcre2grep ))"
-bash: man pcre2grep : syntax error in expression (error token is
"pcre2grep ")
$ printf '%s\n' "$(( !number ))"
printf '%s\n' "$(( number=4 ))"
4
$ set +o histexpand
$ printf '%s\n' "$(( !0 ))"
1
$ (( !0 )); printf '%s\n' "${?}"
0
$ printf '%s\n' "$(( !1 ))"
0
$ printf '%s\n' "$(( !3 ))"
0
$ printf '%s\n' "$(( !number ))"
0

I'm guessing this is a known issue, and would be a gigantic pain to fix.
Wanted to point it out, just in case.

While we're at it:
$ bashbug
/usr/bin/bashbug: line 135: [: missing `]'
/usr/bin/bashbug: line 137: [: missing `]'

- Zack

On Tue, Aug 8, 2023 at 1:44 AM Emanuele Torre 
wrote:

> ! followed by a ; or another terminator  is interpreted as an history
> expansion with no pattern that can never match anything.
>
>   $ !; echo hi
>   bash: !: event not found
>   $ !&& echo hi
>   bash: !: event not found
>
> It should not be intepreted as a history expansion that cannot possibly
> match anything; it should work the same it works if there is a space
> after `!', or if histexpand is off.
>
>   $ ! ; echo "$?" hi
>   1 hi
>   $ ! && echo hi
>   bash: syntax error near unexpected token `&&'
>
> o/
>  emanuele6
>
>


Re: suggestion: shell option for echo to not interpret any argument as an option

2023-07-26 Thread Zachary Santer
On Wed, Jul 26, 2023 at 10:25 AM Kerin Millar wrote:
> echo() { local IFS=' '; printf '%s\n' "$*"; }

There's a simple solution. Ha. Thank you.
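
And a quick check that it behaves, with the function defined as above:

$ var='-n'
$ echo "${var}"
-n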


Re: suggestion: shell option for echo to not interpret any argument as an option

2023-07-26 Thread Zachary Santer
I managed to set xpg_echo on in bash and then forget that I did that. I was
using echo's behavior with -n to determine if set -o posix had taken
effect, when it's completely unrelated. And sh in MSYS2 is definitely just
bash with set -o posix on. What I get for rushing.

However, the man page for bash 5.2 only says the following about xpg_echo,
in the section about shopt:
"If set, the echo builtin expands backslash-escape sequences by default."

It makes no mention of disabling option processing.

Similarly, in the description of the echo command, we see:
"The xpg_echo shell option may be used to dynamically determine whether or
not echo expands these escape characters by default."

So is that a bug in the documentation?

On Wed, Jul 26, 2023 at 10:51 AM Chet Ramey  wrote:

> On 7/26/23 10:15 AM, Zachary Santer wrote:
> > Oh, that's weird. I just assumed that sh would be running bash with 'set
> -o
> > posix'. Evidently, not in MSYS2. 'man sh' takes me to the Bash man page.
>
> Weird. I don't use MSYS2, but that's how it works on Unix/Linux systems.
>
> > When I run sh, 'set -o posix' has no effect, but it definitely makes
> echo
> > not interpret any arguments as options when I'm in bash.
>
> Not quite, at least on Unix/Linux/macOS. `set -o posix' by itself doesn't
> affect echo's behavior with respect to accepting options or expanding
> backslash-escapes in the remaining arguments. There's too much existing
> code to try and bother with that. The `xpg_echo' shell option disables
> option processing and enables backslash-escape translation, which is the
> POSIX/XSI required behavior.
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
>
>


Re: suggestion: shell option for echo to not interpret any argument as an option

2023-07-26 Thread Zachary Santer
Oh, that's weird. I just assumed that sh would be running bash with 'set -o
posix'. Evidently, not in MSYS2. 'man sh' takes me to the Bash man page.
When I run sh, 'set -o posix' has no effect, but it definitely makes echo
not interpret any arguments as options when I'm in bash.

And I'm again not receiving Greg Wooledge's emails, so I'll just respond to
that here. None of us were born knowing how to write bash scripts, sir. For
me, that's been a combination of just doing it, reading through sections of
the manual, and of course the occasional Google search leading to Stack
Overflow. Maybe not the best way to go, but it would be nice if there
weren't as many gotchas lurking about. Maybe that's just a lost cause. Noob
me would've had to have already known about this hypothetical shell option,
which of course would not have been the case.

Eh, well. Thanks, guys.

On Wed, Jul 26, 2023 at 9:24 AM Chet Ramey  wrote:

> On 7/26/23 8:42 AM, Zachary Santer wrote:
>
> > If POSIX mandates that '--' not be taken as the end of options, then the
> > safe thing would be to simply not have echo take any options. Obviously,
> > that would break backwards compatibility, so you'd want this to be
> optional
> > behavior that the shell programmer can enable if desired.
>
> set -o posix
> shopt -s xpg_echo
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
>
>


suggestion: shell option for echo to not interpret any argument as an option

2023-07-26 Thread Zachary Santer
bash's echo command is broken - YouTube


To restate what's in the video, you can't safely use echo to print the
contents of a variable that could be arbitrary, because the variable could
consist entirely of '-n', '-e', or '-E', and '--' is not interpreted as the
end of options, but rather, something to print.

I recognized this and replaced all of my calls to echo with printf some
time ago.

If POSIX mandates that '--' not be taken as the end of options, then the
safe thing would be to simply not have echo take any options. Obviously,
that would break backwards compatibility, so you'd want this to be optional
behavior that the shell programmer can enable if desired.

I guess, alternatively, there could be a shell option for echo to interpret
'--' as the end of options. However, this would require more work on the
part of whoever may be trying to resolve this issue in their scripts.

Just a thought.


Re: Bash Manual section 6.7 Arrays should mention array append notation

2022-03-24 Thread Zachary Santer
Thank you.

Also, "append to a array variable" should be "append to an array variable".

Regards,
Zack

On Thu, Mar 24, 2022 at 3:46 PM Chet Ramey  wrote:

> On 3/24/22 11:12 AM, Zachary Santer wrote:
> > I'm consulting the online manual
> > <https://www.gnu.org/software/bash/manual/html_node/index.html>, so if
> > you're looking for a version number, that would be 5.1.
> >
> > I just now looked at doc/bash.pdf in the git repo on Savannah. No info on
> > appending is present under bash-5.2-testing or devel.
>
> You're looking at two different manuals.
>
> The current doc/bashref.texi contains:
>
> "The @samp{+=} operator will append to a array variable when assigning
> using the compound assignment syntax; see @ref{Shell Parameters} above."
>
> The current doc/bash.1 contains:
>
> The += operator will append to a array variable when assigning
> using the compound assignment syntax; see
> .SM
> .B PARAMETERS
> above.
>
> It's not useful to have more than a reference to the actual description,
> where the rest of the variable assignment syntax is described, in the
> section on arrays.
>
> I don't regenerate the printed manuals on every devel branch push. You need
> to look at the manual source files.
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
>


Re: Bash Manual section 6.7 Arrays should mention array append notation

2022-03-24 Thread Zachary Santer
Neither of Mr. Wooledge's responses made it into my inbox, and they're not
in my spam folder, either. I only saw them upon examining the bug-bash
Archives, wondering if what I was emailing in was getting there.

Thank you, sir, and pardon my misunderstanding.

Regards,
Zack

On Thu, Mar 24, 2022 at 11:12 AM Zachary Santer  wrote:

> I'm consulting the online manual
> <https://www.gnu.org/software/bash/manual/html_node/index.html>, so if
> you're looking for a version number, that would be 5.1.
>
> I just now looked at doc/bash.pdf in the git repo on Savannah. No info on
> appending is present under bash-5.2-testing or devel.
>
> Regards.
> Zack
>
>
> On Tue, Mar 8, 2022 at 10:55 AM Zachary Santer  wrote:
>
>> $ array=( zero one two )
>> $ array+=( three four five )
>> $ declare -p array
>> declare -a array='([0]="zero" [1]="one" [2]="two" [3]="three"
>> [4]="four" [5]="five")'
>> $ array=( [0]=zero [1]=one [2]=two )
>> $ array+=( [3]=three [4]=four [5]=five )
>> $ declare -p array
>> declare -a array='([0]="zero" [1]="one" [2]="two" [3]="three"
>> [4]="four" [5]="five")'
>> $ declare -A assoc_array=( [zero]='0' [one]='1' [two]='2' )
>> $ assoc_array+=( [three]='3' [four]='4' [five]='5' )
>> $ declare -p assoc_array
>> declare -A assoc_array='([four]="4" [one]="1" [five]="5" [zero]="0"
>> [two]="2" [three]="3" )'
>>
>> Talking about the lines with "+=", obviously. I only learned I could
>> do this when I found it in existing code.
>>
>> Regards,
>> Zack
>>
>


Re: Bash Manual section 6.7 Arrays should mention array append notation

2022-03-24 Thread Zachary Santer
I'm consulting the online manual
<https://www.gnu.org/software/bash/manual/html_node/index.html>, so if
you're looking for a version number, that would be 5.1.

I just now looked at doc/bash.pdf in the git repo on Savannah. No info on
appending is present under bash-5.2-testing or devel.

Regards.
Zack


On Tue, Mar 8, 2022 at 10:55 AM Zachary Santer  wrote:

> $ array=( zero one two )
> $ array+=( three four five )
> $ declare -p array
> declare -a array='([0]="zero" [1]="one" [2]="two" [3]="three"
> [4]="four" [5]="five")'
> $ array=( [0]=zero [1]=one [2]=two )
> $ array+=( [3]=three [4]=four [5]=five )
> $ declare -p array
> declare -a array='([0]="zero" [1]="one" [2]="two" [3]="three"
> [4]="four" [5]="five")'
> $ declare -A assoc_array=( [zero]='0' [one]='1' [two]='2' )
> $ assoc_array+=( [three]='3' [four]='4' [five]='5' )
> $ declare -p assoc_array
> declare -A assoc_array='([four]="4" [one]="1" [five]="5" [zero]="0"
> [two]="2" [three]="3" )'
>
> Talking about the lines with "+=", obviously. I only learned I could
> do this when I found it in existing code.
>
> Regards,
> Zack
>


Bash Manual section 6.7 Arrays should mention array append notation

2022-03-08 Thread Zachary Santer
$ array=( zero one two )
$ array+=( three four five )
$ declare -p array
declare -a array='([0]="zero" [1]="one" [2]="two" [3]="three"
[4]="four" [5]="five")'
$ array=( [0]=zero [1]=one [2]=two )
$ array+=( [3]=three [4]=four [5]=five )
$ declare -p array
declare -a array='([0]="zero" [1]="one" [2]="two" [3]="three"
[4]="four" [5]="five")'
$ declare -A assoc_array=( [zero]='0' [one]='1' [two]='2' )
$ assoc_array+=( [three]='3' [four]='4' [five]='5' )
$ declare -p assoc_array
declare -A assoc_array='([four]="4" [one]="1" [five]="5" [zero]="0"
[two]="2" [three]="3" )'

Talking about the lines with "+=", obviously. I only learned I could
do this when I found it in existing code.

Regards,
Zack