Feature Request: stack
I have a feature request: a stack variable. Much like the current DIRSTACK with 'pushd' and 'popd', but for regular arrays. I know it can be implemented with an array, where you push and pop from the end. But a real stack would be better.

-- William Park
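For reference, the array-based workaround the message alludes to can be sketched in a few lines. The function names `stack_push`/`stack_pop` are illustrative only, not an existing or proposed bash API:

```shell
#!/usr/bin/env bash
# A stack built on a regular array: push and pop at the end.
stack=()

stack_push() { stack+=("$@"); }

# Pop the top element into REPLY; fail (return 1) if the stack is empty.
# REPLY is used instead of stdout so the caller needs no command
# substitution, whose subshell would discard the array modification.
stack_pop() {
    local n=${#stack[@]}
    (( n > 0 )) || return 1
    REPLY=${stack[n-1]}
    unset 'stack[n-1]'
}

stack_push a b c
stack_pop && echo "$REPLY"    # c
stack_pop && echo "$REPLY"    # b
```

Since the pop reports emptiness via its return status, a draining loop is just `while stack_pop; do ... "$REPLY" ...; done`.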
Re: New Feature Request
I agree: Python seems to be a more appropriate language for complex operations. Still, bash already offers a lot of features (like `coproc` and `read -t 0`) useful for IPC.

I wrote a little ``multiping`` bash script, as a multitasking demo, running many pings in parallel, reading all their outputs and merging them into one line per interval. Sample (hitting `q` after about 4 seconds):

    $ multiping.sh www.google.com www.archlinux.org www.f-hauri.ch
    Started: PING www.google.com (172.217.168.68) 56(84) bytes of data.
    Started: PING www.archlinux.org (95.217.163.246) 56(84) bytes of data.
    Started: PING www.f-hauri.ch (62.220.134.117) 56(84) bytes of data.
                 www.google.com  www.archlinux.org  www.f-hauri.ch
    11:00:10  1           12.61              46.10            9.27
    11:00:11  2           12.02              47.42            9.24
    11:00:12  3           12.73              47.63            9.22
    11:00:18  4           10.84              46.54            9.40
    www.google.com     4 / 4 -> 0%err. 10.822/12.017/12.661/0.734 ms
    www.archlinux.org  4 / 4 -> 0%err. 46.466/47.125/47.632/0.493 ms
    www.f-hauri.ch     4 / 4 -> 0%err. 9.219/9.282/9.404/0.120 ms

You can find them here:

    https://f-hauri.ch/vrac/multiping.sh.txt
    https://f-hauri.ch/vrac/multiping.sh

On Sun, Dec 27, 2020 at 08:26:49PM +0100, Léa Gris wrote:
> On 27/12/2020 at 19:30, Saint Michael wrote:
> > Yes, superglobal is great.
>
> Maybe you should consider that Bash or shell is not the right tool for
> your needs.
>
> If you need to manipulate complex objects or work with shared resources,
> Bash is a very bad choice. If you want to stay with scripting, as you
> already mentioned using Python: Python is a far better choice for the
> features and requirements you describe.

-- Félix Hauri -- http://www.f-hauri.ch
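The coproc-plus-read-timeout pattern the message mentions can be reduced to a minimal sketch. This is not the actual multiping.sh; the `PRODUCER` loop merely stands in for a ping process, and the 1-second timeout is an arbitrary choice:

```shell
#!/usr/bin/env bash
# Minimal IPC sketch: a coprocess produces lines at intervals, and the
# main shell collects them through the coproc pipe with a read timeout.
coproc PRODUCER {
    for i in 1 2 3; do
        echo "sample $i"
        sleep 0.2
    done
}

lines=0
while (( lines < 3 )); do
    # Wait at most one second for the next line from the coprocess.
    if IFS= read -r -t 1 -u "${PRODUCER[0]}" line; then
        echo "got: $line"
        (( ++lines ))
    fi
done
```

With several coprocesses, the same read-with-timeout loop can round-robin over their file descriptors, which is essentially how outputs get merged into one line per interval.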
Re: Checking executability for asynchronous commands
On Sun, Dec 27, 2020 at 08:02:49AM -0500, Eli Schwartz wrote:
> I'm not sure I understand the question?

My interpretation is that for an async *simple* command of the form

    cmd args &

where cmd cannot be executed at all (due to lack of permissions, perhaps, or because the external program is not installed), they want bash to set $? to a nonzero value to indicate that the command couldn't even be started.

I've seen similar requests several times over the years.

The problem is that the parent bash (the script) doesn't know, and cannot know, that the command was stillborn. Only the child bash process can know this, and by the time this information has become available, the parent bash process has already moved on.

The only way the parent can obtain this information is to wait until that information becomes available. The obvious problem here is that the parent does not know when that information will become available. So, one is stuck choosing from among the following strategies:

1) After launching the async command, sleep for some fraction of a second, and then check whether the child is still running. If it isn't running, retrieve its exit status.

2) Set up a SIGCHLD handler (trap), and process the child's exit status whenever the trap fires.

3) Poll "kill -0" on the child's PID during the script's main loop.

Each of these strategies has its advantages and flaws. None of them is correct for every script.
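Strategy 1 above can be sketched in a few lines. `/no/such/command` is a deliberately nonexistent path used to force the failure, and the 0.2-second sleep is an arbitrary choice:

```shell
#!/usr/bin/env bash
# Strategy 1 sketch: launch the async command, sleep briefly, then check
# whether the child died immediately, and retrieve its status if so.
/no/such/command 2>/dev/null &   # child's exec fails; parent moves on
pid=$!

sleep 0.2              # give the child time to attempt the exec

if kill -0 "$pid" 2>/dev/null; then
    echo "child $pid is still running"
else
    wait "$pid"        # the exit status is retained until waited for
    status=$?
    echo "child exited early with status $status"
fi
```

Here wait returns 127 (command not found); 126 would indicate a file that exists but is not executable. The flaw, as noted, is the guessed sleep duration: a command that legitimately finishes within 0.2 seconds is indistinguishable from one that never started.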
Re: Checking executability for asynchronous commands
On Mon, Dec 28, 2020 at 3:16 PM Greg Wooledge wrote:
> The problem is that the parent bash (the script) doesn't know, and
> cannot know, that the command was stillborn. Only the child bash
> process can know this, and by the time this information has become
> available, the parent bash process has already moved on.

In principle, if the parent and child were to cooperate, I think the status of the final execve() could be communicated to the parent like this: set up a pipe between the parent and the child, with the write side set to close-on-exec, and have the parent block on the read side. If the execve() call fails, the child can send an error message via the pipe; if it succeeds, the parent will see the pipe being closed without a message.

Polling the child after some fraction of a second might not be able to tell a failed execve() apart from the exec'ed process exiting right after the exec.
Re: Checking executability for asynchronous commands
On 12/28/20 8:15 AM, Greg Wooledge wrote:
> On Sun, Dec 27, 2020 at 08:02:49AM -0500, Eli Schwartz wrote:
> > I'm not sure I understand the question?
>
> My interpretation is that for an async *simple* command of the form
>
>     cmd args &
>
> where cmd cannot be executed at all (due to lack of permissions,
> perhaps, or because the external program is not installed), they want
> bash to set $? to a nonzero value to indicate that the command
> couldn't even be started.
>
> I've seen similar requests several times over the years.
>
> The problem is that the parent bash (the script) doesn't know, and
> cannot know, that the command was stillborn. Only the child bash
> process can know this, and by the time this information has become
> available, the parent bash process has already moved on.
>
> The only way the parent can obtain this information is to wait until
> that information becomes available.

Actually, I don't see why one could not circumvent the entire process, and do this instead:

    if cmd=$(type -P foo) && test -x "$cmd"; then
        "$cmd" &
    else
        echo "error: foo could not be found or is not executable"
    fi

But I do get the initial premise of the thread. I don't get the *defense* being offered, though. The logic here seems to be completely bankrupt -- saying bash needs new features (that are not well thought out) so you don't "need" to include code to handle your intentions is not a winning argument.

The OP seems to think that "people will occasionally forget to run `wait`", and wants to know if we "care" that people will forget, and whether Chet will add new features to bash in order to cater to these forgetful people.

This is what I don't understand. Why should we care? The official advice is to run `wait` (or perform executability checks upfront, or whatever).

-- Eli Schwartz
Arch Linux Bug Wrangler and Trusted User
Re: Checking executability for asynchronous commands
On 28/12/2020 at 21:18, Eli Schwartz wrote:
> if cmd=$(type -P foo) && test -x "$cmd"; then
>     "$cmd" &
> else
>     echo "error: foo could not be found or is not executable"
> fi

When you handle such logic within Bash, you have already lost to a race condition: foo may be readable and executable when you test it, but no longer by the time the actual execution happens.

Bash is full of race conditions, because bash was never meant as a general-purpose language. Bash is a good command sequencer.

Now, if people forget to use `wait PID` after launching a sub-shell background command, then shame on them.

-- Léa Gris
Re: Checking executability for asynchronous commands
On 12/28/20 4:45 PM, Léa Gris wrote:
> When you handle such logic within Bash, you have already lost to a
> race condition: foo may be readable and executable when you test it,
> but no longer by the time the actual execution happens.
>
> Bash is full of race conditions, because bash was never meant as a
> general-purpose language. Bash is a good command sequencer.
>
> Now, if people forget to use wait PID after launching a sub-shell
> background command, then shame on them.

The race condition doesn't matter if they need to check the status by waiting on the PID anyway, in order to handle "it executed, but resulted in failure". So it's not very tragic if the race condition results in users being unable to see the *enhanced* error message describing the missing dependency executable... this entire thread seems to just be about error reporting, right?

(Though I have to wonder at these amazing AWOL commands that get uninstalled on people all the time right in the middle of their scripts. Maybe if they used a package manager for both the scripts and the dependency executables, they could fully prevent race conditions *and* not even need to check whether their dependencies are installed.)

-- Eli Schwartz
Arch Linux Bug Wrangler and Trusted User
Re: Checking executability for asynchronous commands
Markus Elfring writes:
> I imagine that it can be occasionally helpful to determine the
> execution failure in the synchronous way.
> Would it make sense to configure the error reporting for selected
> asynchronous commands so that they would become background processes
> only after the required check for executability?

In many situations, you can check whether a command is executable with

    [[ -x name ]]

As a general rule, Bash provides functionality which matches the direct way to implement an operation using Unix. In the case of "command &", the direct way is: (1) fork a subprocess, (2) the subprocess does an exec() of command.

The consequences are that (a) the parent process has no information about the subprocess until it executes a wait() for the subprocess, and (b) the subprocess, while executing bash, has no information about whether "command" is executable until it executes exec() -- if the command is not executable, the exec() will fail and bash will print an error message and exit with an error, but if the command is executable, the subprocess bash is immediately replaced by the command. In particular, there is no way for it to report to the parent process that the exec() has been successfully executed.

With this sequence of operations, there is no way for the parent bash to know in advance whether "command" is executable. Of course, with further operations, it could first test whether "command" is executable, but that is effectively the same as performing "[[ -x command ]]" in the bash script before performing "command &". (And it has the same difficulty regarding race conditions.)

Dale
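Consequence (b) can be made visible directly: the child's failed exec() surfaces only as an exit status, retrieved via wait. This sketch creates a throwaway non-executable file with mktemp and also uses a deliberately nonexistent path:

```shell
#!/usr/bin/env bash
# A failed exec() in the child is reported only through the exit status:
# 126 = found but not executable, 127 = not found.
tmp=$(mktemp)                  # exists, but has no execute bit
"$tmp" 2>/dev/null &
wait $!; st1=$?
echo "not executable -> $st1"  # 126

/no/such/command 2>/dev/null &
wait $!; st2=$?
echo "not found -> $st2"       # 127

rm -f "$tmp"
```

Until the wait happens, $? in the parent only reflects whether the fork and launch succeeded, which is exactly the behavior the thread is discussing.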