Is this intended behavior??

2013-02-11 Thread Bruce Korb
 /tmp
 $ echo $PS1
 \w\n\$ 
 /tmp
 $ mkdir -p ZZ/a/b/c
 /tmp
 $ pushd ZZ
 /tmp/ZZ /tmp
 /tmp/ZZ
 $ pushd a
 /tmp/ZZ/a /tmp/ZZ /tmp
 /tmp/ZZ/a
 $ pushd b/c
 /tmp/ZZ/a/b/c /tmp/ZZ/a /tmp/ZZ /tmp
 /tmp/ZZ/a/b/c
 $ popd /var/tmp
 /tmp/ZZ/a/b/c /tmp/ZZ/a /tmp/ZZ
 /tmp/ZZ/a/b/c
 $ popd /var/tmp
 /tmp/ZZ/a/b/c /tmp/ZZ/a
 /tmp/ZZ/a/b/c
 $ 

It is behaving as if it were given the -0 option, but the argument
really isn't -0.  The `-N' description probably should mention (just
for clarity) that the current directory is left unchanged, just as
with `-n'.

`popd'
  popd [+N | -N] [-n]

 Remove the top entry from the directory stack, and `cd' to the new
 top directory.  When no arguments are given, `popd' removes the
 top directory from the stack and performs a `cd' to the new top
 directory.  The elements are numbered from 0 starting at the first
 directory listed with `dirs'; i.e., `popd' is equivalent to `popd
 +0'.
`+N'
  Removes the Nth directory (counting from the left of the list
  printed by `dirs'), starting with zero.

`-N'
  Removes the Nth directory (counting from the right of the
  list printed by `dirs'), starting with zero.

`-n'
  Suppresses the normal change of directory when removing
  directories from the stack, so that only the stack is
  manipulated.
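The documented -N behavior is easy to demonstrate: removing a non-top entry never triggers a `cd', which is exactly what the transcript above shows. A small sketch (`$base' is just a scratch directory for the demo):

```shell
# Removing the rightmost stack entry with -0 leaves the current
# directory unchanged, because the top of the stack is untouched.
base=$(mktemp -d)
mkdir -p "$base/ZZ/a"
cd "$base"
pushd ZZ > /dev/null
pushd a > /dev/null
dirs                  # three entries, current directory on the left
popd -0 > /dev/null   # drops the rightmost entry; no cd happens
dirs                  # two entries remain
pwd                   # still $base/ZZ/a
```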



Waiting for _any_ background process to terminate.

2013-02-11 Thread Alan Mackenzie
Hi, bug-bash.

From a bash script, I'd like to be able to start several subtasks and
react to any one of them completing.  I don't think I can do this with
the current bash.  The `wait' function either waits on a specified subtask
to finish, or for _all_ subtasks to finish.

Am I mistaken about this perceived lack?  If not, would it be possible
to add this functionality into bash?

The sort of thing I want to do with this is to perform lots of gzippings
in separate tasks, so as to spread them amongst the cores of my 4-core
processor, always keeping 4 subtasks on the go at any time.

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: Waiting for _any_ background process to terminate.

2013-02-11 Thread Greg Wooledge
On Mon, Feb 11, 2013 at 06:59:35PM +, Alan Mackenzie wrote:
 From a bash script, I'd like to be able to start several subtasks and
 react to any one of them completing.  I don't think I can do this with
 the current bash.  The `wait' function either waits on a specified subtask
 to finish, or for _all_ subtasks to finish.

This is more of a help-bash matter than a bug-bash one.

The general approach in these cases is to set up a trap for SIGCHLD.

 The sort of thing I want to do with this is to perform lots of gzippings
 in separate tasks, so as to spread them amongst the cores of my 4-core
 processor, always keeping 4 subtasks on the go at any time.

Oh, THAT particular problem.  See
http://mywiki.wooledge.org/ProcessManagement#Advanced_questions
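The SIGCHLD-trap approach can be sketched as follows (variable names and the sleep commands are illustrative, not from the thread). Bash runs the CHLD trap as children terminate, and the arrival of a trapped signal makes `wait' return early, so the script can react before all jobs are done:

```shell
# Count completed background jobs via a CHLD trap.
finished=0
trap 'finished=$((finished + 1))' CHLD
sleep 0.1 &
sleep 0.2 &
wait                  # returns as soon as a trapped signal arrives
echo "finished=$finished"
```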



Re: Waiting for _any_ background process to terminate.

2013-02-11 Thread Chet Ramey
On 2/11/13 1:59 PM, Alan Mackenzie wrote:
 Hi, bug-bash.
 
 From a bash script, I'd like to be able to start several subtasks and
 react to any one of them completing.  I don't think I can do this with
 the current bash.  The `wait' function either waits on a specified subtask
 to finish, or for _all_ subtasks to finish.
 
 Am I mistaken about this perceived lack?  If not, would it be possible
 to add this functionality into bash?

Right now, you have to build your own using a SIGCHLD trap.  The next
version of bash will have `wait -n', which will wait for the next process
to change state.
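Assuming a bash where `wait -n' exists (it first appeared in bash 4.3), the four-at-a-time gzip pattern from the original post becomes straightforward. File names and counts below are illustrative:

```shell
# Keep at most 4 gzip jobs in flight, starting a new one whenever
# any running job finishes.  Requires bash 4.3+ for `wait -n`.
workdir=$(mktemp -d)                   # scratch data for the demo
for i in 1 2 3 4 5 6 7 8; do
    printf 'data %s\n' "$i" > "$workdir/f$i.log"
done

max_jobs=4
for f in "$workdir"/*.log; do
    while (( $(jobs -pr | wc -l) >= max_jobs )); do
        wait -n                        # returns as soon as any job exits
    done
    gzip -- "$f" &
done
wait                                   # collect the remaining jobs
```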

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: Waiting for _any_ background process to terminate.

2013-02-11 Thread Alan Mackenzie
On Mon, Feb 11, 2013 at 03:05:25PM -0500, Chet Ramey wrote:
 On 2/11/13 1:59 PM, Alan Mackenzie wrote:
  Hi, bug-bash.

  From a bash script, I'd like to be able to start several subtasks and
  react to any one of them completing.  I don't think I can do this with
  the current bash.  The `wait' function either waits on a specified subtask
  to finish, or for _all_ subtasks to finish.

  Am I mistaken about this perceived lack?  If not, would it be possible
  to add this functionality into bash?

 Right now, you have to build your own using a SIGCHLD trap.  The next
 version of bash will have `wait -n', which will wait for the next process
 to change state.

:-)  I'll look forward to the release indeed.  Thanks.

 Chet

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: Waiting for _any_ background process to terminate.

2013-02-11 Thread Alan Mackenzie
On Mon, Feb 11, 2013 at 02:34:53PM -0500, Greg Wooledge wrote:
 On Mon, Feb 11, 2013 at 06:59:35PM +, Alan Mackenzie wrote:
  From a bash script, I'd like to be able to start several subtasks and
  react to any one of them completing.  I don't think I can do this with
  the current bash.  The `wait' function either waits on a specified subtask
  to finish, or for _all_ subtasks to finish.

 This is more of a help-bash matter than a bug-bash one.

OK.

 The general approach in these cases is to set up a trap for SIGCHLD.

  The sort of thing I want to do with this is to perform lots of gzippings
  in separate tasks, so as to spread them amongst the cores of my 4-core
  processor, always keeping 4 subtasks on the go at any time.

 Oh, THAT particular problem.  See
 http://mywiki.wooledge.org/ProcessManagement#Advanced_questions

Thanks.  An interesting page.  I'll have to study it.

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: SIGTERM ignored before exec race

2013-02-11 Thread Pádraig Brady

On 02/10/2013 08:30 PM, Chet Ramey wrote:

On 2/9/13 12:02 AM, Pádraig Brady wrote:

$ rpm -q kernel glibc bash
kernel-2.6.40.4-5.fc15.x86_64
glibc-2.14.1-6.x86_64
bash-4.2.10-4.fc15.x86_64

I notice the following will wait for 5 seconds for
the timeout process to end with SIGALRM, rather than
immediately due to kill sending the SIGTERM.


I'll take a look at making the race window smaller; there is probably
some code reordering that will have a beneficial effect.

This race exists, to a certain extent, in all Bourne-like shells.  This
problem only happens when run interactively, and it happens because
interactive shells ignore SIGTERM.  No matter how quickly you modify a
child's signal handlers after fork() returns, there's always the chance
that a kernel's scheduling policies or some global auto-nice of child
or background processes will cause it to happen.


You might be able to do something like:

sigprocmask(block SIGTERM);        // ensure parent shell doesn't get TERM
signal (SIGTERM, SIG_DFL);         // reset to default for the child to inherit
if (fork() == 0) {
    sigprocmask(unblock SIGTERM);  // child: any pending TERM is now delivered
} else {
    signal (SIGTERM, SIG_IGN);     // parent: continue to ignore TERM
    sigprocmask(unblock SIGTERM);  // parent: restore the mask
}

cheers,
Pádraig.



[Parameter Expansion] bug in ${variable% *}

2013-02-11 Thread Dashing
Bash version: 4.2.042 

I have a script that behaves erratically:
=
#! /bin/bash
last=${1##* }
rest=${1% *}
while [[ ${rest: -1} == '\' ]]; do
last=${rest##* } $last
oldrest=$rest
rest=${rest% *}
if [[ $oldrest == $rest ]]; then
echo :--Mistake--:
#   sleep 0.01
#   rest=${rest% *}
#   if [[ $oldrest == $rest ]]; then
#   echo 'unable to interpret'
#   break
#   fi
fi
done
echo REST:$rest:
echo LAST:$last:
=
$ ./pe 'mplayer foo1\ foo2\ foo3\ 4\ 5\ foo6\ 7'
:--Mistake--:
:--Mistake--:
REST:mplayer:
LAST:foo1\ foo1\ foo1\ foo2\ foo3\ 4\ 5\ foo6\ 7:
=

What happens is that rest=${rest% *} doesn't always update $rest, 
even when there are spaces left in $rest.
Meanwhile, last=${rest##* } $last works every time.
In the above example it got stuck twice when $rest was `mplayer 
foo1\'.  Exactly where it gets stuck varies wildly between runs.

I have commented out some code including sleep that sometimes helps 
make the parameter expansion work. This commented out code also 
avoids an infinite loop which is triggerable in a normal way with 
an argument of:
command\ filename

Thanks,
Dashing
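For what it's worth, the erratic line is likely the script's own quoting rather than parameter expansion: unquoted, `last=${rest##* } $last' is parsed as a temporary assignment followed by a *command* named by the expansion of $last, not as a single string assignment. A minimal sketch with hypothetical values:

```shell
# Unquoted,   last=${rest##* } $last   is assignment-prefix + command.
# Quoted, it is an ordinary string assignment.
rest='mplayer foo1\ foo2\'
last='7'
last="${rest##* } $last"   # ${rest##* } strips up to the last space
echo "$last"
```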




eval doesn't close file descriptor?

2013-02-11 Thread matei . david
With the script below, I'd expect any fd pointing to /dev/null to be closed 
when the second llfd() is executed. Surprisingly, fd 3 is closed, but fd 10 is 
now open, pointing to /dev/null, as if eval copied it instead of closing it. Is 
this a bug?

Thanks,
M


$ bash -c 'llfd () { ls -l /proc/$BASHPID/fd/; }; x=3; eval "exec $x>/dev/null"; llfd; eval "llfd $x>&-"'
total 0
lrwx------ 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2
lrwx------ 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2
lrwx------ 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2
l-wx------ 1 matei matei 64 Feb 11 18:36 3 -> /dev/null
lr-x------ 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv
total 0
lrwx------ 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2
lrwx------ 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2
l-wx------ 1 matei matei 64 Feb 11 18:36 10 -> /dev/null
lrwx------ 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2
lr-x------ 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv
$ bash --version
GNU bash, version 4.2.24(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ 


Re: eval doesn't close file descriptor?

2013-02-11 Thread Pierre Gaston
On Tue, Feb 12, 2013 at 1:54 AM, matei.da...@gmail.com wrote:

 With the script below, I'd expect any fd pointing to /dev/null to be
 closed when the second llfd() is executed. Surprisingly, fd 3 is closed,
 but fd 10 is now open, pointing to /dev/null, as if eval copied it instead
 of closing it. Is this a bug?

 Thanks,
 M


 $ bash -c 'llfd () { ls -l /proc/$BASHPID/fd/; }; x=3; eval "exec $x>/dev/null"; llfd; eval "llfd $x>&-"'
 total 0
 lrwx------ 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2
 lrwx------ 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2
 lrwx------ 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2
 l-wx------ 1 matei matei 64 Feb 11 18:36 3 -> /dev/null
 lr-x------ 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv
 total 0
 lrwx------ 1 matei matei 64 Feb 11 18:36 0 -> /dev/pts/2
 lrwx------ 1 matei matei 64 Feb 11 18:36 1 -> /dev/pts/2
 l-wx------ 1 matei matei 64 Feb 11 18:36 10 -> /dev/null
 lrwx------ 1 matei matei 64 Feb 11 18:36 2 -> /dev/pts/2
 lr-x------ 1 matei matei 64 Feb 11 18:36 8 -> /proc/4520/auxv
 $ bash --version
 GNU bash, version 4.2.24(1)-release (x86_64-pc-linux-gnu)
 Copyright (C) 2011 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later 
 http://gnu.org/licenses/gpl.html

 This is free software; you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.
 $


Note that the same happens without using eval:
$ llfd 3>&-
total 0
lrwx------ 1 pgas pgas 64 Feb 12 08:00 0 -> /dev/pts/0
lrwx------ 1 pgas pgas 64 Feb 12 08:00 1 -> /dev/pts/0
l-wx------ 1 pgas pgas 64 Feb 12 08:00 10 -> /dev/null
lrwx------ 1 pgas pgas 64 Feb 12 08:00 2 -> /dev/pts/0
lrwx------ 1 pgas pgas 64 Feb 12 08:00 255 -> /dev/pts/0

But you need to consider which process you are examining: you use a
function, so you are listing the file descriptors of the process in
which that function runs.

A function runs in the same process as the parent shell.  If bash
simply closed fd 3, there would no longer be an fd open on /dev/null
in the parent shell when the function returned.  So bash does a little
juggling with the file descriptors, moving fd 3 temporarily so that it
can be restored afterwards.
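The juggling is easy to see directly (a sketch; the saved descriptor lands on whatever high fd bash picks, 10 in the listings above):

```shell
# Closing fd 3 just for one function call makes bash first save fd 3
# on a high fd so it can restore it when the call returns.
llfd () { ls -l /proc/$BASHPID/fd/; }
exec 3>/dev/null   # open fd 3 in the shell itself
llfd 3>&-          # fd 3 absent, but the saved copy still points at /dev/null
llfd               # fd 3 is back
```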