Re: Undocumented feature: Unnamed fifo '<(:)'

2020-06-28 Thread Pierre Gaston
On Sun, Jun 28, 2020 at 3:50 PM felix  wrote:

> There is maybe something to document or even create a new feature
> about open2 and open3...
>

Maybe "coproc" is already the feature you need (limited to only one, though)?
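For what it's worth, a minimal coproc round trip might look like this (using cat as the coprocess; the name CAT is just illustrative):

```shell
#!/usr/bin/env bash
# Start a coprocess named CAT; bash puts its stdout/stdin fds in CAT[0]/CAT[1].
coproc CAT { cat; }

# Write a line to the coprocess and read its echo back.
echo "hello" >&"${CAT[1]}"
IFS= read -r line <&"${CAT[0]}"
echo "parent got: $line"
```

Consistent with the "only one" limitation, bash warns if you start a second coproc while the first is still running.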


Re: local failure

2020-05-31 Thread Pierre Gaston
On Sun, May 31, 2020 at 5:22 PM Oğuz  wrote:

> On Sunday, May 31, 2020, Laurent Picquet  wrote:
>
> > Ok, thanks for the clarification.
> >
> > This behaviour is not fully documented and I believe this should be
> > addressed.
> >
> >
> I think it is very well documented in the Simple Command Expansion section
> of the manual (
>
> https://www.gnu.org/software/bash/manual/html_node/Simple-Command-Expansion.html#Simple-Command-Expansion
> ).
>
or even
https://www.gnu.org/software/bash/manual/html_node/Bash-Builtins.html#Bash-Builtins

" The return status is zero unless local is used outside a function, an
invalid name is supplied, or name is a readonly variable"
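That return status is also the source of a classic pitfall: local itself succeeds even when a command substitution in the assignment fails, for example:

```shell
#!/usr/bin/env bash
f() {
  # $? reflects local itself (success), not the failed command substitution
  local v=$(false)
  echo "after local v=\$(false): $?"

  # declaring first and assigning separately preserves the real status
  local w
  w=$(false)
  echo "after w=\$(false): $?"
}
f
```

The first line prints 0, the second prints 1.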


Re: alias problem -- conflict found

2019-07-10 Thread Pierre Gaston
On Wed, Jul 10, 2019 at 2:03 PM L A Walsh  wrote:

>
>
> On 2019/07/10 00:04, Robert Elz wrote:
> > Date:Tue, 09 Jul 2019 20:24:30 -0700
> > From:L A Walsh 
> > Message-ID:  <5d255a6e.4060...@tlinx.org>
> >
> >   | Why?  What makes clarity "horrible".
> >
> > It isn't the clarity (if you call it that, it is really obscurity
> > as no-one else can read your scripts/commands and have any idea
> > what they really do) but the using of aliases that way to achieve it.
> >
> I doubt that.
>

Stephen Bourne did the same tricks in C, and it led to the creation of
The International Obfuscated C Code Contest

https://www.ioccc.org/faq.html

Q: How did the IOCCC get started?
A: One day (23 March 1984 to be exact), back when Larry Bassel and I (Landon
Curt Noll) were working for National Semiconductor's Genix porting group,
we were both in our offices trying to fix some very broken code. Larry had
been trying to fix a bug in the classic Bourne shell (C code #defined to
death to sort of look like Algol) and I had been working on the finger
program from early BSD (a bug ridden finger implementation to be sure). We
happened to both wander (at the same time) out to the hallway in Building
7C to clear our heads.

We began to compare notes: ''You won't believe the code I am trying to
fix''. And: ''Well you cannot imagine the brain damage level of the code
I'm trying to fix''. As well as: ''It is more than bad code, the author
really had to try to make it this bad!''.




Re: Mozilla's Security Warning for Link For Maintainer's "Bash Page" Link ......

2019-05-05 Thread Pierre Gaston
On Sun, May 5, 2019 at 10:48 PM Harvey Rothenberg 
wrote:

>
> The reason behind my interest to check-out this page is the recent article
> in the May 2019 issue of Linux Pro Magazine's - Command-Line - New
> Commands.
>
> So I checked my laptop's version of Bash for these seven (7) new
> commands.  I found that I currently do not have them.  I wanted to see,  if
> I would have to upgrade my version of BASH to the latest + versus + adding
> each command separately (which also can be accomplished).
>

If you are referring to this article
http://www.linux-magazine.com/Issues/2019/222/New-Commands-for-Old-Purposes,
I can only see the first of these commands, and it's not a bash builtin
but an external executable.
I suspect most of these commands are like this one, so look for them in
your software repository (e.g. sudo apt-get install tree).

Pierre

PS: I'm pretty sure there was already a tree command 20 years ago.


Re: How to compile hashlib.c for testing?

2018-12-27 Thread Pierre Gaston
On Fri, Dec 28, 2018 at 4:28 AM Peng Yu  wrote:

> We are talking about unit testing in the bash C source code, not bash
> scripts.
>

While toying with the loadable builtins I came up with this:

https://github.com/pgas/newt_builtin/blob/master/make.libbash

You need to set BASH_PATH so that it points to the bash sources
(e.g. make "BASH_PATH=/path/to/sources" after running ./configure).
I use it as part of my build, so it probably needs some work to be truly
usable, but maybe you can adapt it
(or add something like it directly in the Makefile of bash and run make
libbash.a).

I basically compile a .a instead of the bash executable, which ensures I
pretty much have everything that's needed.
The problem is that this "library" contains a "main".

So as a workaround I use "-Wl,--wrap=main" when linking, and an
"int __wrap_main(void)" instead of main in my test programs
(basically what cmocka uses to mock C functions:
https://lwn.net/Articles/558106/).

I'm aware that it may raise more questions and difficulties for you than
it actually solves, but that's my 2 cents.
Pierre


Re: Assignment of $* to a var removes spaces on unset IFS.

2018-08-14 Thread Pierre Gaston
On Tue, Aug 14, 2018 at 6:18 AM Eduardo A. Bustamante López <
dual...@gmail.com> wrote:

> On Mon, Aug 13, 2018 at 10:52:20PM -0400, Bize Ma wrote:
> (...)
> > That version is not even beta, it is still alpha, are you asking that
> > everyone should use
> > non-released (and not yet tested as beta) alpha release ?
> (...)
>
> I didn't say that. My point is that, in the context of bug reports, it's
> important that you always test against the *unreleased* version of the
> code,
> since that's where most bug fixes are queued up (or at least the ones that
> break
> backwards compatibility).
>
> > So? It is not solved in that thread, it is even said that:
> >
> >  > This is a bug in bash and it should be fixed, not excused.
> >
> > To which I agree. After a year, nothing else have been said about it.
> >
>
> For some reason the threading broke in the archive browser. Here's the
> response
> from Chet:
>
> http://lists.nongnu.org/archive/html/bug-bash/2017-10/msg00023.html
>
> The fix for this issue is already in the Git repository (devel branch) and
> in
> Bash 5.0.
>
> I cannot answer for Chet, but please consider this:
>
> The fix for this bug breaks backwards compatibility, so that means it
> cannot be
> a patch-level fix for 4.4. It should either be a minor version increase
> (4.5) or
> go into the next major release, which is 5.0 (which was already declared
> alpha
> state, and includes the fix).
>
> > It seems about time to get it solved. Or say that it won't be.
>
> Again, it's fixed already and the fix is scheduled to go out in the next
> release (5.0).
>
> Given that, do you think this bug is severe enough to have to issue a new
> minor
> version just for it? (4.5),
>
> or can we just wait for 5.0 to come out which fixes this and a bunch of
> other
> bugs? (see the CHANGES file, in particular, entry `oo',
>
> http://git.savannah.gnu.org/cgit/bash.git/diff/CHANGES?h=bash-5.0-testing&id=9a51695bed07d37086c352372ac69d0a30039a6b
> )
>
>


Re: bash brace issues (similar to shellshock)

2018-08-06 Thread Pierre Gaston
On Mon, Aug 6, 2018 at 4:32 PM, martins dada 
wrote:

> Find attached details regarding bash brace issues. King regards.
>

You are simply assigning the string "(){" to n in the temporary
environment of the command

$  n=(){ bash -c 'echo $n'
(){

just like:

a=foo bash -c 'echo $a'

I'd agree that I would not expect bash to accept this without quotes,
but it does not allow executing arbitrary commands the way Shellshock did.
At least your examples don't show this.

Your third example is best understood if you move the redirection to the
end:

n=(){ a= date >\ echo

A redirection can appear anywhere around a simple command.
As you wrote it, it looks funny, but it's no different from "date > file".
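For example (writing to a temp file to show the two placements are parsed identically):

```shell
#!/usr/bin/env bash
tmp=$(mktemp)
echo first > "$tmp"     # usual placement
> "$tmp" echo second    # redirection first: same command, same effect
cat "$tmp"              # prints "second"
rm -f "$tmp"
```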


Re: Number with sign is read as octal despite a leading 10#

2018-07-10 Thread Pierre Gaston
On Tue, Jul 10, 2018 at 1:44 PM, Ilkka Virta  wrote:

> I think the problematic case here is when the number comes as input from
> some program, which might or might not print a leading sign or leading
> zeroes, but when we know that the number is, in any case, decimal.
>
> E.g. 'date' prints leading zeroes, which is easy enough to handle:
>
> hour=$(date +%H)
>
> hour=${hour#0} # remove one leading zero, or
> hour="10#$hour"# make it base-10
>
> The latter works even with more than one leading zero, but neither works
> with a sign. So, handling numbers like '-00159' gets a bit annoying:
>
> $ num='-00159'
> $ num="${num:0:1}10#${num:1}"; echo $(( num + 1 ))
> -158
>
> And that's without checking that the sign was there in the first place.
>
>
> Something like that will probably not be too common, but an easier way to
> force any number to be interpreted in base-10 (regardless of leading
> zeroes) could be useful. If there is a way, I'd be happy to hear.



It's not too complicated to separate the sign from the number, e.g.:

for num in 159 000159 +000159 -000159;do
   echo $((${num%%[!+-]*}10#${num#[-+]}))
done
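The same idiom wrapped in a helper (to_base10 is just an illustrative name): the first expansion keeps only a leading sign, the second strips it so the rest can be prefixed with 10#.

```shell
#!/usr/bin/env bash
# Split an optional sign from the digits and force base-10 interpretation.
to_base10() {
  echo "$(( ${1%%[!+-]*}10#${1#[-+]} ))"
}

for num in 159 000159 +000159 -000159; do
  to_base10 "$num"   # prints 159, 159, 159, -159
done
```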


Re: Directing into a variable doesn't work

2018-06-24 Thread Pierre Gaston
On Sun, Jun 24, 2018 at 8:35 PM, Peter Passchier 
wrote:

> On 06/25/2018 12:27 AM, Robert Elz wrote:
> > That's not the real issue - rather it is that a here doc is presented to
> the
> > command beng run as a file descrptior
>
> OK, thanks, that makes sense. In the case of a here-variable, that would
> definitely be the case then.
>
> Peter
>
>
>
Also, in practice, there is a good chance that the temp files are
written to a tmpfs filesystem, and thus to memory; and even if that is
not the case, with all the caching done by the OS there is a chance your
data will never actually be written to disk.


Re: Conditions with logical operators and nested groups execute "if" and "else"

2018-05-21 Thread Pierre Gaston
On Tue, May 22, 2018 at 12:17 AM, Uriel  wrote:

> Configuration Information:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
> -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
> -DSHELL
> -DHAVE_CONFIG_H   -I.  -I../. -I.././include -I.././lib  -Wdate-time
> -D_FORTIFY_SOURCE=2 -g -O2
> -fdebug-prefix-map=/build/bash-DWMIDv/bash-4.4.18=.
> -fstack-protector-strong -Wformat -Werror=format-security -Wall
> -Wno-parentheses -Wno-format-security
> uname output: Linux HPgS 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1
> (2018-05-07) x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 4.4
> Patch Level: 19
> Release Status: release
>
> Description:
> As you know, a conditional is of the type:
> if [[ EXPRESSION ]]; then TRUE CONDITION; else ALTERNATIVE RESULT; fi
>
> Or with logical operators and groups:
>
> [[ EXPRESSION ]]; && { TRUE CONDITION; } || { ALTERNATIVE RESULT; }
>
> Within each of the conditionals there may be more nested, written the first
> or
> second way they should give the same result
>

Why? || means "or", not "else".
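A quick demonstration of the difference: with A && B || C, the C part also runs when A succeeds but B fails, so the two forms are not equivalent.

```shell
#!/usr/bin/env bash
# Both "branches" can run:
true && { echo "then"; false; } || echo "else runs too"

# A real if/else executes exactly one branch:
if true; then echo "then"; false; else echo "else"; fi
```

This prints "then", "else runs too", "then".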


Re: ind=`expr index "${string}" is` -----bug

2018-03-18 Thread Pierre Gaston
> From: zangwenkuo
>
> To: bug-bash@gnu.org
> when i use the fun as title (expr)
> i got a "expr: syntax error"
>
>
expr is an external tool, and expr is the one giving you the error, so
it's not a bash problem.
It may be that your "expr" is not the GNU version and thus doesn't
support this syntax.

To get the same result you could use the pattern matching of standard
expr, e.g. matching all the characters not in your set, plus one:

ind=`expr "$string" :  '[^is]*.'`

with pure (ba)sh, you could use parameter expansion like:

temp=${string%%[is]*}
ind=$(( ${#temp} + 1))


Re: which paradigms does bash support

2018-03-14 Thread Pierre Gaston
On Mon, Jan 26, 2015 at 6:05 PM, Pádraig Brady  wrote:

> On 26/01/15 13:43, Greg Wooledge wrote:
> > On Sun, Jan 25, 2015 at 08:11:41PM -0800, garegi...@gmail.com wrote:
> >> As a programming language which paradigms does bash support.
> Declarative, procedural, imperative?
> >
> > This belongs on help-b...@gnu.org so I'm Cc'ing that address.
> >
> > Shell scripts are procedural.
>
> It should be noted that shell programming is closely related to functional
> programming.
> I.E. functional programming maintains no external state and provides
> data flow synchronisation in the language.  This maps closely to the
> UNIX filter idea; data flows in and out, with no side affects to the
> system.
>
> By trying to use filters and pipes instead of procedural shell statements,
> you get the advantage of using compiled code, and implicit multicore
> support etc.
>
> cheers,
> Pádraig.
>

Though I understand what you say, and maybe you can see pipes as
something functional(ish),
I believe this is a misleading statement: imo shell scripting is not
even close to being functional in any way.


Re: Arm machine does not execute background statement correctly

2018-02-28 Thread Pierre Gaston
On Wed, Feb 28, 2018 at 4:03 PM, Chet Ramey  wrote:

> On 2/28/18 5:31 AM, Lakshman Garlapati wrote:
>
> > The following snippet is working fine in x86 processor machine not
> working
> > in arm processor machine from bash 4.3 version onwards.
> >
> > test.sh
> > =
> > #!/bin/bash
> > rm out.txt
> > function abc() {
> >   if [ 2 -eq 1 ]; then
> > echo "TRUE"
> >   else
> > echo "FALSE"
> >   fi
> > }
> > abc &
> >
> > bash -x test.sh
> > ===
> > + rm -f out.txt
> > + abc
> > + '[' 2 -eq 1 ']'
> > + echo TRUE  < Here we are expecting FALSE
> > TRUE
> >
> > please provide some guidance on how to resolve the problem, let me know
> if
> > problem statement is not clear.
>
> I can't reproduce it and don't have a good idea about what might be going
> wrong on your system. Is the arm version compiled with job control enabled?
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRUc...@case.eduhttp://tiswww.cwru.edu/~chet/
>
>
I cannot reproduce it either (on an arm machine)

$ bash -x test.sh
[2]+echo '4.3.30(1)-release'
4.3.30(1)-release
[3]+uname -m
armv7l
[4]+rm out.txt
rm: cannot remove 'out.txt': No such file or directory
[12]+abc
$ [6]+'[' 2 -eq 1 ']'
[9]+echo FALSE
FALSE


Re: Function definitions

2018-02-26 Thread Pierre Gaston
On Mon, Feb 26, 2018 at 12:45 PM,  wrote:

>
> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-unknown-linux-gnu'
> -DCONF_VENDOR='unknown' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
> -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  -D_FORTIFY_SOURCE=2
> -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong -fno-plt
> -DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/bin'
> -DSTANDARD_UTILS_PATH='/usr/bin' -DSYS_BASHRC='/etc/bash.bashrc'
> -DSYS_BASH_LOGOUT='/etc/bash.bash_logout' -DNON_INTERACTIVE_LOGIN_SHELLS
> -Wno-parentheses -Wno-format-security
> uname output: Linux linmac2 4.15.5-1-ARCH #1 SMP PREEMPT Thu Feb 22
> 22:15:20 UTC 2018 x86_64 GNU/Linux
> Machine Type: x86_64-unknown-linux-gnu
>
> Bash Version: 4.4
> Patch Level: 19
> Release Status: release
>
> Description:
> Bash rejects valid function definitions
>
> Repeat-By:
>
> $ func() true
> bash: syntax error near unexpected token `true'
>
> # Variant#2
> $ func() { true }
> > ^C
>
> Both forms seem to be valid per [1] and are accepted by (at least)
> ash, zsh and mksh
>
> Worth noting that the Variant#2 can be made to work in bash with an
> addition
> of a semicolon:
>
> $ func() { true; }
>
> [1] http://pubs.opengroup.org/onlinepubs/9699919799/
> utilities/V3_chap02.html
>
> --
> mailto:moos...@gmail.com
>
>
On the contrary, SUS doesn't define either one, as a function definition
requires a compound command

http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_05

and a delimiter is required in compound commands

http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_04
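In bash itself, as an extension of what the standard guarantees, any compound command works as a function body, which is why only the brace-group form needs the delimiter:

```shell
#!/usr/bin/env bash
f() { echo "brace group"; }   # the ';' (or a newline) before '}' is required
g() ( echo "subshell body" )  # no delimiter needed before ')'
h() if true; then echo "if body"; fi
f; g; h
```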


Re: Quoting and string comparison

2018-02-08 Thread Pierre Gaston
On Thu, Feb 8, 2018 at 4:27 PM, Jaan Vajakas  wrote:

> So should Bash report a syntax error?
>

You do not have a syntax error; the rules for quoting inside $( ) are
the same as outside, whether you use outer quotes "$( )" or not.

So in your case you are escaping the double quotes, concatenating the
literal character " with the content of your variable, and you end up
comparing the 3-character string <"a"> with b.

echo "$(echo \"a\")" # prints "a"; the inner double quotes lose their
special meaning.





> 2018-02-08 15:24 GMT+01:00 Clark Wang :
>
> > On Thu, Feb 8, 2018 at 9:05 PM, Jaan Vajakas 
> > wrote:
> >
> >> Hi!
> >>
> >> I noticed a weird behavior. Is it a bug?
> >>
> >> Namely, why does
> >>
> >> echo "$(for f in a b c; do if [[ \"$f\" > b ]]; then echo "$f > b"; else
> >> echo "$f <= b"; fi; done)"
> >>
> >
> > Should be:
> >
> >   echo "$(for f in a b c; do if [[ $f > b ]]; then echo "$f > b"; else
> > echo "$f <= b"; fi; done)"
> >
> >
> >> output
> >>
> >> a <= b
> >> b > b
> >> c > b
> >>
> >> ?
> >>
> >> I would have expected the same output as one of
> >> echo "$(for f in a b c; do if [[ "$f" > b ]]; then echo "$f > b"; else
> >> echo
> >> "$f <= b"; fi; done)"
> >> echo "$(for f in a b c; do if [[ '"$f"' > b ]]; then echo "$f > b"; else
> >> echo "$f <= b"; fi; done)"
> >>
> >> This happens e.g. on GNU Bash v4.4 (e.g.
> >> https://www.tutorialspoint.com/execute_bash_online.php ).
> >>
> >>
> >> Best regards,
> >> Jaan
> >>
> >
> >
>


Re: ~/.profile and ~/.bash_profile aren't executed on login

2017-12-09 Thread Pierre Gaston
On Sun, Dec 10, 2017 at 1:41 AM, Yuri  wrote:

> On 12/09/17 15:24, Chet Ramey wrote:
>
>> Of course not: that's not a login shell.  As the documentation says,
>>
>> "A login shell is one whose first character of argument zero is a -,  or
>> one started with the --login option."
>>
>> The INVOCATION section of the manual page explains it in exhaustive
>> detail.
>>
>
>
> Ok, but that's not what my situation is. I am just logging in, using the
> display manager, when user has /usr/local/bin/bash as the shell in passwd.
>
> Why doesn't it execute ~/.profile?
>
>
> Yuri
>
>
>
.profile is supposed to be a file that is read only once per
login, so that you can do initialization there.  When you run a
shell after login it doesn't read this file anymore (there are
other initialization files, e.g. .bashrc, that are read every time).

There are a couple of tricks, coming from the history of Unix, to
tell a shell that it should do this initialization, and these
tricks are used by the programs handling the login when they
start a shell.

However, the way you log in to your system seems not to involve
bash (or even a shell); when you then start a terminal, bash is
invoked as a non-login interactive shell.

You can read more there:
http://mywiki.wooledge.org/DotFiles
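You can check which kind of shell you got with the read-only login_shell shopt (the --noprofile flag here just keeps the example quiet by skipping startup files):

```shell
#!/usr/bin/env bash
bash -c 'shopt -q login_shell && echo login || echo non-login'
bash --login --noprofile -c 'shopt -q login_shell && echo login || echo non-login'
```

The first prints "non-login", the second "login".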


Re: help complete: mention remove all AND restore all

2017-11-05 Thread Pierre Gaston
On Sun, Nov 5, 2017 at 6:58 PM, 積丹尼 Dan Jacobson 
wrote:

> $ help complete
>
>   -rremove a completion specification for each NAME, or, if no
> NAMEs are supplied, all completion specifications
>
> Add
> To later restore them do ...
>
> as one often wants to remove them all, try something, and then put them
> all back.
>
> I am not asking for help. I am saying $ help complete should mention
> more things.
>
>
There is no magic way to restore them; typically they are loaded by some
system-wide rc file, but the way this is implemented differs, and of
course users are free to do what they want.
I guess a straightforward way would be to "exec bash".


Re: Documentation issue

2017-10-26 Thread Pierre Gaston
On Thu, Oct 26, 2017 at 8:18 AM, Eli Barzilay  wrote:

> Bash surprised me with the behavior mentioned here:
>
> https://stackoverflow.com/questions/15897473
>
> This can be pretty bad in that it's very unexpected (see the comments).
> Also, the surprise can be triggered without nullglob as well:
>
> $ foo=(a b c)
> $ touch foo0
> $ unset foo[0]
> $ echo ${foo[*]}
> a b c
>
> The thing is that AFAICT, there is no mention of this pitfall in the man
> page...  It would be nice to mention using quotes in at least the
> `unset` description, and possibly also about `nullglob` too since it
> makes it easier to run into this problem.
>
> I grepped through the bash sources, and even there I found a few unsafe
> uses:
>
> grep -r 'unset[^a-z"'\'']*\[' examples tests
>
> so this is clearly something that is not well-known enough.
>
> --
>((x=>x(x))(x=>x(x)))  Eli Barzilay:
>http://barzilay.org/  Maze is Life!
>
>
I think it's even more likely to happen with, e.g., read array[i].

There is a large number of pitfalls in bash
(http://mywiki.wooledge.org/BashPitfalls) that most people are unaware of.
I'm not sure where to rank this one among the ones that would need to be
mentioned in the manual.
Maybe one could create a separate man/info page that the manual could
reference?
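The pitfall in a nutshell, run in a scratch directory containing a file named foo0:

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"
foo=(a b c)
touch foo0            # a file whose name matches the glob foo[0]

unset foo[0]          # unquoted: the glob expands to "foo0", array untouched
echo "unquoted: ${#foo[@]} elements"   # unquoted: 3 elements

unset 'foo[0]'        # quoted: element 0 is really removed
echo "quoted: ${#foo[@]} elements"     # quoted: 2 elements
```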


Re: RFE & RFC: testing executability

2017-10-01 Thread Pierre Gaston
On Sun, Oct 1, 2017 at 7:31 AM, L A Walsh  wrote:

> I was looking at a script that tested command for execute before
> executing them.
> The script used:
>
>  cmd=$(PATH=/stdpath type -p cmd)
>
> and later tested executability with "-x $cmd", raising 2 problems. The
> first was "-p" returning empty if "cmd" was an alias or
> function.  Second was that even if "-P" was used, the "-x" test failed
> if cmd was an alias, function or preceded by 'command' (which I was
> surprised, also, to find, "not executable").
>
> I realize that -x is probably only looking for whether or not the OS
> considers its "object" to be executable, but it seems that a modern
> shell (like bash) might also include *its* *own* "objects", that it
> knows to be either executable or equivalent.  I can think of 3 cases
> where the shell could do better in assessing executability than it
> is today:
>
> First: if we have a function: by definition, a function is
> "executable".  Regardless of its contents, it is executable shell
> script and I would prefer (and have least surprise) if functions
> were always considered to be "executable" (with "-x" returning
> true).
>
> A second case: an alias could be seen (and tested) as similar to
> variable access:
>
> I.e. if an alias pointed to an executable, e.g.
> alias Ls=$(type -P ls)
>
> I'd like to see the shell intelligent enough to do an executable
> test on :
> -x "${BASH_ALIASES[Ls]}"
>
>
> and thirdly, testing, *either*, whether the object of a "command "
> is executable (preferred) -- i.e. testing '',
> *or* evaluating "command" as always being executable (-x = "always true")
> on the premise that command is used to invoke an executable program
> (circumventing any aliases or functions).
>
>
> Comments?  Reasonable?  Wanted? Doable or patchable?
>
> -linda
>

Besides the fact that most people don't use aliases in scripts, which
would be the only place where such a feature would really be valuable, I
think the main problem is that an alias can contain pretty much
anything: loops, partial code, etc.

What should be the result of -x in the  following case?

alias f='for i in 1 2;do echo foo;done'
alias g='if true; then'
alias h='true;missing_command'

Should it be recursive?
alias j=f

In my opinion the feature you describe is pretty much tailored to your
specific need; it is probably hard to give it a really more general and
sensible meaning.


Re: [BUG] Bash segfaults on an infinitely recursive funcion (resend)

2017-09-24 Thread Pierre Gaston
On Sun, Sep 24, 2017 at 5:01 PM, Shlomi Fish  wrote:

> Hi all,
>
> With bash git master on Mageia v7 x86-64, bash on Debian Stable and other
> reported sytems:
>
> shlomif@telaviv1:~$ /home/shlomif/apps/bash/bin/bash -c 'run() { run; } ;
> run'
> Segmentation fault (core dumped)
> shlomif@telaviv1:~$
>

This, or some variant, has been reported multiple times.
As in most programming languages, you can easily write programs that
behave badly; in this case you are exhausting the stack, as there is no
tail call optimization.

see for instance
https://lists.gnu.org/archive/html/bug-bash/2012-09/msg00073.html
and the following discussion
https://lists.gnu.org/archive/html/bug-bash/2012-10/threads.html#5
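Note that bash does offer an opt-in guard: setting FUNCNEST (available since bash 4.2) bounds the function-call depth, so runaway recursion fails with a shell error instead of exhausting the C stack:

```shell
#!/usr/bin/env bash
# With FUNCNEST set, bash stops the recursion itself with a
# "maximum function nesting level exceeded" error instead of crashing.
bash -c 'FUNCNEST=50; run() { run; }; run' 2>&1
```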


Re: extension of file-test primitives?

2017-08-23 Thread Pierre Gaston
On Wed, Aug 23, 2017 at 3:55 PM, L A Walsh  wrote:

>
>
> Greg Wooledge wrote:
>
>>
>>
>> They're not intended to work that way.  If you want to test f+x+s then
>> you just make another function:
>>
>> -fxs() { test -f "$1" && test -x "$1" && test -s "$1"; }
>>
>>
> How many different single-ops?  over 20?  That's 20 factorial
> combos.  You wanna include that in a script?  um...
>

You can use a loop; here is a hack(ish) function that perhaps works
(i.e. not tested much):

testfile () {
  local OPTIND=1 f=${!#}
  while getopts abcdefghLkprsSuwxOGN opt; do
    case $opt in
      [abcdefghLkprsSuwxOGN]) test -"$opt" "$f" || return 1;;
      *) return 1;;
    esac
  done
}

if testfile -fx file; then ...


Re: Document bug of 'for' compound command

2017-08-18 Thread Pierre Gaston
On Fri, Aug 18, 2017 at 6:22 PM, vanou  wrote:

> Hello,
>
> I think, there is document bug related to 'for' compound command in both
> Man page and Info doc.
>
>
> In man page, description of 'for' compound command ...
>
> 
> 
> *  for (( expr1 ; expr2 ; expr3 )) ; do list ; done
> * First, the arithmetic expression expr1 is evaluated according
> * to the rules described below under ARITHMETIC EVALUATION.
> * The arithmetic  expression  expr2 is  then evaluated  repeatedly
> * until it evaluates to zero.
> <-- not zero, but 1
> * Each time expr2 evaluates to a non-zero value,
>  <-- not non-zero,but 0
> * list is executed and the arithmetic expression expr3 is evaluated.
> * If any expression is omitted, it behaves as if it evaluates to 1.
> <-- not 1, but 0
> * The return value is the exit status of the last command in list
> * that is executed, or false if any of the expressions is invalid.
> 
> 
>
>
> And same document bug in Info documentation of bash.
>
> This bug is seen at bash 4.4.
>
> Thanks,
> Vanou
>

The documentation seems OK; what makes you think the contrary?
E.g.:
for ((;0;)); do echo foo;done # prints nothing
for ((;1;)); do echo foo;done # is infinite
for ((;;)); do echo foo;done # is also infinite


Re: Performance issue of find function in Gluster File System

2017-08-16 Thread Pierre Gaston
On Wed, Aug 16, 2017 at 11:02 PM, Zhao Li 
wrote:

> Hi,
>
> I found there is a big difference of time performance between "ls" function
> and "find" function in Gluster File System
>  ide/GlusterFS%20Introduction/>.
> Here is the minimal working example.
>
> mkdir tmp
> touch tmp/{000..300}.txt
>
> time find ./ -path '*tmp*' -name '*.txt'> /dev/null
> real 0m42.629s
> user 0m0.675s
> sys 0m1.438s
>
> time ls tmp/*.txt > /dev/null
> real 0m0.042s
> user 0m0.003s
> sys 0m0.003s
>
> So I am wondering what C code you use for "ls" and "find" and how you
> explain "*" in "ls" and "find" to lead to this big difference in Gluster
> File System.
>
> Thanks a lot.
> Zhao
>

There are several differences. First, note that "ls" is not the one
finding the files: the shell expands *.txt, and ls is passed all the
matching files as arguments.
*.txt is not recursive, so only the files directly under tmp/ will be
searched.

In your find command, -path matches the whole path (/ included) and your
find command will descend into all the directories, whether they match
tmp or not; so depending on where you started the search, it may search
your whole / partition.

A more comparable command would be:

find /tmp -name tmp -o -prune -name '*.txt' -print

or if your find command supports it:

find /tmp -maxdepth 1 -name '*.txt'

Note also that ls and find are separate tools that are not developed along
with bash.

For gnu find: https://www.gnu.org/software/findutils/
For gnu ls: https://www.gnu.org/software/coreutils/coreutils.html

But there are also other implementation for various systems.


Re: bash 4.4 null byte warning!!

2017-08-03 Thread Pierre Gaston
On Thu, Aug 3, 2017 at 1:56 PM, emlyn.j...@wipro.com 
wrote:

> Hi guys!
>
> My organization is receiving repeated alerts for using my mail id here :
> https://mail-archive.com/bug-bash@gnu.org/msg19561.html
>
> Could you please remove my maid id?
>
> Thank you!
>
> Regards,
> Emlyn Jose.
>
>
This is a public mailing list, moreover mirrored on Usenet, so there are
many, many copies of these emails publicly available, about which the
administrators of this list can do very little.

I'm afraid the site you point to is just one of these; it's a third-party
site that makes the mailing list available to everybody.
You should try contacting them; see:
https://www.mail-archive.com/faq.html#delete


Re: bash segfaults on a recursive command

2017-07-28 Thread Pierre Gaston
On Fri, Jul 28, 2017 at 5:01 PM,  wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc -I/home/abuild/rpmbuild/BUILD/bash-4.3
> -L/home/abuild/rpmbuild/BUILD/bash-4.3/../readline-6.3
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-suse-linux-gnu'
> -DCONF_VENDOR='suse' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
> -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib   -fmessage-length=0
> -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector
> -funwind-tables -fasynchronous-unwind-tables -g  -D_GNU_SOURCE
> -DRECYCLES_PIDS -Wall -g -Wuninitialized -Wextra -Wno-unprototyped-calls
> -Wno-switch-enum -Wno-unused-variable -Wno-unused-parameter
> -Wno-parentheses -ftree-loop-linear -pipe -DBNC382214=0
> -DIMPORT_FUNCTIONS_DEF=0 -fprofile-use
> uname output: Linux linux-wm1d.suse 4.4.74-18.20-default #1 SMP Fri Jun 30
> 19:01:19 UTC 2017 (b5079b8) x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-suse-linux-gnu
>
> Bash Version: 4.3
> Patch Level: 42
> Release Status: release
>
> Description:
> bash segfaults
>
> Repeat-By:
> eval $BASH_COMMAND
>
>
>

This, or some variant, has come up multiple times. Bash doesn't have
something like tail call optimization and it doesn't set arbitrary limits,
so at some point you exhaust the stack and it crashes.

For instance:
https://lists.gnu.org/archive/html/bug-bash/2014-08/msg00100.html
https://lists.gnu.org/archive/html/bug-bash/2015-09/msg00045.html


Re: Regression -- can't read input w/stderr redirect

2017-06-19 Thread Pierre Gaston
On Mon, Jun 19, 2017 at 5:17 AM, L A Walsh  wrote:

>
>
> Chet Ramey wrote:
>
>> On 6/18/17 6:59 PM, L A Walsh wrote:
>>
>>
>>> Chet Ramey wrote:
>>>
>>>
 Bash has always stripped NULL bytes. Now it tells you it's doing it.


>>> Why?  Did I toggle a flag asking for the warning?  Seems like it
>>> was sorta sprung on users w/no way to disable it.
>>>
>>>
>>
>> Users asked why bash transformed input without warning, even though it
>> had been doing that it's entire lifetime. A warning is appropriate.
>>
>>
> Maybe - but links to, at least, 2-3 users who filed bug reports about this
> problem in bug-bash would be appropriate as well to justify the inclusion
> of the text.
>
> I don't recall it ever coming up until the warning message was discussed
> as being unwelcome.  So please, I'd like to see the bug-report filings
> where this
> happened.
>
>
> 
>>>But things are changing -- people have asked for zero-terminated
>>> read's
>>> and readarrays.  More unix utils are offering NUL termination as an
>>> option
>>> because newlines alone don't cut it in some instances.
>>>
>>>
>>
>> And bash provides mechanisms to deal with the relatively few use cases
>> where it is a problem.
>>
>> Recall that the only thing that has changed is that bash now provides a
>> warning about what it's doing.
>>
>>
> Oh?  I want to read in a value from the registry where something may have
> zeros
> after the data.  Please tell me the mechanism to read this in w/no warnings
> that won't silence more serious cases of zero's other than at the end of
> line.
>
> I want to see the hyperlinks to archived bug-discussions on bug-bash where
> users complained about this and where it was at the end of a string where
> they expected to be able to read past a binary-0 in the input.
>
> I know I would have like the ability to read binary data into to a var that
> might "include a NUL", but I don't recall ever complaining about
> end-of-string
> NUL's being trimmed -- and it was drummed home to me how the null's were
> the
> end of the string -- not how bash read everything but nulls from input.
>
>
I'm sorry to say that your behavior on this list is just not acceptable.

If you were on IRC I would have banned you much earlier, yet after all
these years of trolling the list, Chet goes to great lengths to explain
the rationale for his choices while you keep whining for the shell to
just do what you want in whatever particular case you happen to be
working on at the moment.

Your response: you accuse him of lying to you.

I don't think this is constructive in any way and I'm sure
that, even if Chet has probably experienced this kind of online
situation more than most, it's not a pleasant one.

Pierre.


Re: Storing NUL in variables

2017-06-10 Thread Pierre Gaston
On Sat, Jun 10, 2017 at 2:06 AM, George  wrote:

> On Fri, 2017-06-09 at 20:58 +0300, Pierre Gaston wrote:
>
> On Fri, Jun 9, 2017 at 8:40 PM, Peter & Kelly Passchier 
>  wrote:
>
>
>
> On 09/06/2560 23:38, L A Walsh wrote:
>
>
> Chet Ramey wrote:
>
>
>
>  Should mapfile silently drop the NULs?
>
>
>
> Maybe add a flag to ignore NUL bytes that could be used in the 'read'
> statement as well?  If not specified, keep same behavior?
>
>
>
> That sounds like it might be useful.
> It might be more desirable to change it to a newline instead of dropping
> it? (Or both, with different flags??)
>
>
>
>
> I feel this kind of magic behavior would result in hackish scripts or fill
> a somewhat rare niche at best.
> I'd rather have bash fully handle arrays of bytes, or nothing.
>
>
>
> I think allowing shell variables to contain NUL would be lovely. How about
> we make that happen?  :)
> (I would be up for writing a patch to do it, of course, though I have a
> few other things in the pipeline... A feature like this could take a fair
> bit of work depending on how far the implementation goes in supporting
> various things.)
>
> Of course such variables couldn't be exported (the NULs would be lost if
> the data were stored in an environment variable) and for compatibility,
> variables should probably support containment of NULs only if the caller
> specifically requests it with an argument to "declare" or "read".
>
> ...And then there is the problem of how to use such variables. They can't
> be exported as environment variables or passed as command arguments, or
> used as file names...  Essentially they'd be limited to use in I/O within
> the shell, and within a handful of built-in commands or shell functions
> equipped to properly handle that data.
>
> I think that this approach, capturing an arbitrary byte stream and then
> taking further actions to process or encode it, is preferable to the
> alternative of capturing the byte stream and simultaneously encoding it
> into a text format. In principle commands like "read" shouldn't transform
> the data they're given, they should just store it. (I think the fact that
> read requires the option "-r" to read data without transforming it is kind
> of unfortunate...)
>
> (That said, one could argue that it would be equally reasonable, or even
> more reasonable to implement an operation that simultaneously reads and
> encodes the data, and another that decodes the data and writes it out, and
> then any commands designed to perform operations on byte stream data in the
> shell (re-encode it in a different format, etc.) should simply use that
> first encoding as a common format for exchanging the data..  Given the
> limitations of the shell with respect to its ability to handle NUL in
> various contexts, I think it's a reasonable argument. I tend to prefer the
> idea of providing true shell support for capturing a byte stream because it
> makes it easier to write code that handles the data without having to
> build-in a parser to interpret the data first.)
>
> One option that might make a feature like this integrate into the shell
> better would be to store a captured byte stream as an integer array rather
> than as an atomic variable. The back-end implementation in this case could
> be very efficient, and the stored data would be manipulable using existing
> array syntax. The main limitation perhaps would be that one could not
> create an array of these arrays.
>

Without too much thinking about it, I'd propose something like this:

- extend readarray (or maybe provide another builtin) to read bytes with
an interface like that of dd (block size, offset, skip) and store the
bytes in the array, e.g.:  readarray -b bs=1024 cs=100 byte_array 

Re: Trailing newlines disappear

2017-06-09 Thread Pierre Gaston
On Fri, Jun 9, 2017 at 8:40 PM, Peter & Kelly Passchier <
peterke...@passchier.net> wrote:

> On 09/06/2560 23:38, L A Walsh wrote:
> > Chet Ramey wrote:
> >>
> >>  Should mapfile silently drop the NULs?
> >
> > Maybe add a flag to ignore NUL bytes that could be used in the 'read'
> > statement as well?  If not specified, keep same behavior?
>
> That sounds like it might be useful.
> It might be more desirable to change it to a newline instead of dropping
> it? (Or both, with different flags??)
>

I feel this kind of magic behavior would result in hackish scripts or
fill a somewhat rare niche at best.
I'd rather have bash fully handle arrays of bytes, or nothing.


> And how about a shell option to not omit trailing newlines in command
> substitutions?? I find that very undesirable and unnecessary behaviour.
>
> Peter
>
I would argue that dropping the trailing newlines is what most users
want.
Even if they don't realize it, few people would expect:

var=$(wc -l file);echo "$var"

to print 2 lines.

Trailing newlines are often not that interesting.
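A quick illustration of both the stripping and one common way to keep the newlines when you really need them (the sentinel trick below is my own addition, not something from this thread):

```shell
# Command substitution strips *all* trailing newlines:
var=$(printf 'line\n\n\n')
echo "${#var}"     # 4 -- just "line"; the three newlines are gone

# Common workaround when they matter: print a sentinel character
# inside the substitution, then strip it off afterwards.
var=$(printf 'line\n\n\n'; echo x)
var=${var%x}
echo "${#var}"     # 7 -- "line" plus its three trailing newlines
```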


Re: History Feature issue

2017-05-22 Thread Pierre Gaston
I'd suggest that you look into the loadable builtin examples in the tarball
and find ideas to write a couple of these.

For instance, you could try to write a Json parser that allows callback and
set some bash variables mirroring the Json in an associative array or
something like that.

There is little documentation, so as a result you kind of have to look
around in the bash source code, and cool builtins could be a way to
contribute to bash.

At least that's what I do ( https://github.com/pgas ), even though I have
no real ambition doing this :D

Pierre

On Mon, May 22, 2017 at 1:27 PM, Pranav Deshpande <
deshpande.v.pra...@gmail.com> wrote:

> Thank you.
> It seems that there is no need to implement this feature. Could you suggest
> me some bug so that I can get started?
> Something which is simple for a beginner?
>
> Regards,
> Pranav.
>
> On Mon, May 22, 2017 at 3:06 AM, Eduardo Bustamante 
> wrote:
>
> > On Sun, May 21, 2017 at 11:29 AM, Pranav Deshpande
> >  wrote:
> > > The lssue here: https://savannah.gnu.org/support/?109000 interests me.
> > > It's something that I have experienced while using the shell. I am
> > > interested in solving it
> >
> > That bug report lacks detail. There are already ways of sharing
> > history between multiple sessions. Some of the alternatives are
> > outlined here: https://unix.stackexchange.com/questions/1288/preserve-
> > bash-history-in-multiple-terminal-windows
> >
> > Give these a try first.
> >
> > > Any ideas so as how can I get started with the code base?
> >
> > If you're interested in the implementation, review:
> >
> > dualbus@debian:~/src/gnu/bash$ head -n1 bashhist.c builtins/history.def
> > ==> bashhist.c <==
> > /* bashhist.c -- bash interface to the GNU history library. */
> >
> > ==> builtins/history.def <==
> > This file is history.def, from which is created history.c.
> >
> > And if you want to go into even more detail, review the history
> > library as implemented by GNU readline (and bundled with bash):
> >
> > dualbus@debian:~/src/gnu/bash$ head -n1 lib/readline/hist*c
> > ==> lib/readline/histexpand.c <==
> > /* histexpand.c -- history expansion. */
> >
> > ==> lib/readline/histfile.c <==
> > /* histfile.c - functions to manipulate the history file. */
> >
> > ==> lib/readline/history.c <==
> > /* history.c -- standalone history library */
> >
> > ==> lib/readline/histsearch.c <==
> > /* histsearch.c -- searching the history list. */
> >
>


Re: {varname} redirection for a command or group leaves the file open

2017-05-19 Thread Pierre Gaston
On Sat, May 20, 2017 at 5:38 AM, Eduardo Bustamante 
wrote:

> On Fri, May 19, 2017 at 3:32 PM,   wrote:
> [...]
> > I'd really like to see Bash get on the right side of this issue - and
> > the sooner the better.
>
> There is no right side. Only two opposing viewpoints. I don't think
> it's enough to justify the change breaking backwards compatibility.
>
Maybe, but I somehow feel it would fix fds leaking out there more often
than it would break something.


Re: {varname} redirection for a command or group leaves the file open

2017-05-10 Thread Pierre Gaston
On Wed, May 10, 2017 at 8:07 PM, Aldo Davide  wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: x86_64-pc-linux-gnu-gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
> -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
> -DSHELL -DHAVE_CONFIG_H   -I. -I./include -I. -I./include -I./lib
> -DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
> -DSTANDARD_UTILS_PATH='/bin:/usr/bin:/sbin:/usr/sbin'
> -DSYS_BASHRC='/etc/bash/bashrc' -DSYS_BASH_LOGOUT='/etc/bash/bash_logout'
> -DNON_INTERACTIVE_LOGIN_SHELLS -DSSH_SOURCE_BASHRC -O2 -march=native -pipe
> -Wno-parentheses -Wno-format-security
> uname output: Linux mycomputer 4.9.24 #8 SMP PREEMPT Tue Apr 25 11:19:58
> EEST 2017 x86_64 Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz GenuineIntel
> GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 4.4
> Patch Level: 12
> Release Status: release
>
> Suppose that you use a "varname" redirection when executing a simple
> command, e.g.:
>
> ls -lh /proc/self/fd {var}>/dev/null
>
> I was surprised to discover that the file descriptor remains open after
> the command has completed, as evidenced by issuing the following (works
> on linux) immediately afterwards:
>
> echo "var is $var"
> ls -lh /proc/$$/fd
>
> This is unlike what happens with standard redirections, e.g.:
>
> ls -lh /proc/self/fd 57>/dev/null
> ls -lh /proc/$$/fd
>
> The same problem exists when braces are used to group (possibly)
> multiple commands:
>
> { ls -lh /proc/self/fd; } {var}>/dev/null
> echo "var is $var"
> ls -lh /proc/$$/fd
>
> On the other hand, everything works just fine with subshells:
>
> (ls -lh /proc/self/fd) {var}>/dev/null
> echo "var is $var"
> ls -lh /proc/$$/fd
>
> As a side-note, in the subshell example the variable var will be
> undefined in the second line, but defined inside the subshell. With
> groups it will remain defined after the group:
>
> { echo "inside the grouping: $var1"; } {var1}>/dev/null
> echo "outside the grouping: $var1"
> (echo "inside the subshell: $var2") {var2}>/dev/null
> echo "outside the subshells: $var2"
>
> So in summary, I would expect groups to work like subshells, both in
> regards to closing the file descriptor but also in regards to the scope
> of the variable.
>
See:
https://lists.gnu.org/archive/html/bug-bash/2012-11/msg00040.html
Pierre
PS: I'm with you ;)


Re: Brace expansion fail compilation

2017-04-26 Thread Pierre Gaston
On Wed, Apr 26, 2017 at 1:13 PM, Florian Mayer  wrote:

> $ echo $BASH_VERSION
> 4.4.12(1)-release
> $ echo $BASH_VERSION{nobraceexpansion}
> 4.4.12(1)-release{nobraceexpansion}
> $ echo ${BASH_VERSION}{brace,expansion}
> 4.4.12(1)-releasebrace 4.4.12(1)-releaseexpansion
> $ echo $BASH_VERSION{brace,expansion}
> => no output. Unexpected
> $ echo $BASH_VERSIONfoo
> => no output as expected
>
> Why does
> $ echo $BASH_VERSION{brace,expansion}
> produce no output?
>
It's because brace expansion occurs before parameter expansion,
so first
  echo $BASH_VERSION{brace,expansion}
is expanded to
  echo $BASH_VERSIONbrace $BASH_VERSIONexpansion
and these 2 variables are not set.

you can verify with:
$ BASH_VERSIONbrace=foo BASH_VERSIONexpansion=bar; echo
$BASH_VERSION{brace,expansion}
foo bar


Re: Syntax error near unexpected token `newline' within loops

2017-04-24 Thread Pierre Gaston
On Mon, Apr 24, 2017 at 1:59 PM,  wrote:

> (...)
>
> and yes
>
> V_NAME=Friday
> for (( INDEX=0; INDEX<$((10-${#V_NAME})) ; INDEX++ ))
> do
> echo $INDEX
> done
>
> does also work, nevertheless using $(...) in the very first example is
> allowed
>
>
or even: for (( INDEX=0; INDEX<(10-${#V_NAME}) ; INDEX++ ))
but otherwise yes, syntax looks ok


Re: Curious case of arithmetic expansion

2017-04-23 Thread Pierre Gaston
On Sun, Apr 23, 2017 at 4:07 PM, Florian Mayer  wrote:

> It does not matter, how this construct in this particular context is
> called.
> The difference between $(()) and (()) is that $(()) actually expands to
> something
> whereas (()) just executes a C-like expression. In (())
>  can also
> include assignments, as the bash manual that you properly cited, also
> elaborates on.
> You can do, for example, things like
> $ foo=2
> $ ((foo+=100)) # fo is now 102
> $ ((++(foo++)))
> or even
> $ ((foo++, foo++, foo++, foo++, foo+=100))
> and (oh boy why) even
> $ foo=(123 321)
> $ ((foo[0]++, foo[1]—))
>
> So I might have chosen the wrong subject text for this mail,
> but again, it does not matter whether those constructs actually *expand* to
> some string
> or not. The side effects are what matter here. And in my opinion those are
> not correct...
>

I understand what you want; I'm just explaining the result you get and
why it doesn't do what you want.
As it is, a variable can contain the name of another variable, but it
can also contain any arbitrary string that is a correct arithmetic
expression.

So if you have:

foo=bar+14+baz
baz='moo*2'
moo=1

what should echo $((foo++)) do?

Your case is really only a special case. Arguably, having only
indirection instead of what we have now would have been a better idea,
but it has been this way for so long that I doubt it will change.

PS: you can perhaps use name reference instead
moo=1;
declare -n foo=bar bar=moo
echo $((foo++))
echo $moo


Re: Curious case of arithmetic expansion

2017-04-23 Thread Pierre Gaston
On Sun, Apr 23, 2017 at 3:28 PM, Florian Mayer  wrote:

> What I’m saying is, that if bash does recursively apply expansion
> mechanisms on the identifiers until it can retrieve a number,
> it should do it symmetrically. That is,
> it should remember what chain of expansion had been necessary for
> a particular number to appear at the end of the expansion.
>
> So instead of
> 124 moo 123
> The echo command should produce
> bar moo 124
>
> (The expansion chain here was foo->bar->moo->123)
>
> It's because it's not really indirection, rather the content of the
> variable is evaluated:
>
> No it is really indirection. Bash even has a special (and very limited)
> syntax for that.
> Consider
> $ foo=bar; bar=moo
> You can get the string „moo“ through foo by using
> $ echo ${!foo}
>
> $ echo ${!!foo} # or something else does not work, though...
>
>
This is indeed indirection, but what happens in arithmetic evaluation is
not.

Quoting the manual:

"The value of a variable is evaluated as an  arithmetic  expression
 when  it  is  referenced, or when a variable which has been given the
integer attribute using declare -i is assigned a value. "

Consider this:
foo=1+3
echo $foo
echo $((foo++))
echo $foo


Re: Curious case of arithmetic expansion

2017-04-23 Thread Pierre Gaston
On Sun, Apr 23, 2017 at 1:12 PM, Florian Mayer  wrote:

> Consider
>
> $ foo=bar; bar=moo; moo=123
> $ echo $foo $bar $moo
> => bar moo 123
> $ echo $((foo))
> => 123
> $ echo $foo $bar $moo
> => bar moo 123
> $ # so far so good but
> $ ((foo++))
> $ echo $foo $bar $moo
> => 123 moo 123
>
> Now my chain of indirections is broken…
>

It prints "124 moo 123", no?

It's because it's not really indirection; rather, the content of the
variable is evaluated:

$ foo=1+2;echo $((foo))
3

In the last case it evaluates the value of foo, 123, and then increments
foo, so foo becomes 124.
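The whole exchange can be condensed into a short sketch of what arithmetic evaluation actually does to the chain:

```shell
foo=bar; bar=moo; moo=123

echo "$((foo))"   # 123 -- foo's value "bar" is itself evaluated, then "moo", then 123
((foo++))         # evaluates the chain to 123, then assigns 124 to foo *itself*
echo "$foo"       # 124 -- foo no longer contains "bar"
echo "$bar $moo"  # moo 123 -- the rest of the chain is untouched
```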


builtins.h missing include guard

2017-04-19 Thread Pierre Gaston
I'm toying with loadable builtins and I noticed that builtins.h does not
have an include guard.

Basically I needed the definition of WORD_LIST and I was using (not sure
if there is a better way):

#include 
#include 
#include 


That's all
Pierre


Re: ``shopt -s extglob'' and ``function @() { true; }''

2017-03-30 Thread Pierre Gaston
On Fri, Mar 31, 2017 at 7:00 AM, Clark Wang  wrote:

> There is a post on stackoverflow: http://stackoverflow.com/
> questions/43117707/bashs-strange-behavior-on-a-function-named/
>
> The *problem*:
>
> bash-4.4# shopt -s extglob
> bash-4.4# function @() { echo foo; }
> bash-4.4# @()
> foo
> bash-4.4# declare -f
> @() ()
> {
> echo foo
> }
> bash-4.4#
> bash-4.4# unset -f '@()'
> bash-4.4#
> bash-4.4# shopt -s nullglob
> bash-4.4# function @() { echo foo; }
> bash-4.4# @()
> bash-4.4# declare -f
> @() ()
> {
> echo foo
> }
> bash-4.4#
>
> So when extglob is on, @() is handled as a glob pattern which makes
> sense. But the behavior after shopt -s nullglob indicates that the glob
> pattern @() is not *filename-expand*ed for function @(). This looks kind
> of counter-intuitive to me.
>
> Bug or feature?
>
> -clark
>


> ​
>
Since the manual does not document special chars as allowed in function
names, it can be whatever :D

"function" is a keyword not a builtin, so it can change the rules of
parsing, like trying to parse a function name instead of performing the
normal expansions (that's also why you can't do things like  var=foo;
function $var { echo foo; } )

I'm more surprised that @() runs the function at all. (It seems to hang
my 4.3 bash here.)


Re: process substitution flawed by design

2017-02-21 Thread Pierre Gaston
On Tue, Feb 21, 2017 at 3:18 PM, Florian Mayer  wrote:

> The following code assumes the lock to be in state not-taken before the
> snippet runs.
>
> echo foo  | tee \
> >(mutex --lock; echo before; cat; echo after; mutex --unlock) \
> >(mutex --lock; echo foobar; mutex --unlock) \
> > /dev/null | cat
>
> for mutex --lock I use a tool which I wrote myself. Since I created this
> tool, there is a small chance that an error inside that program is the
> cause for my problem, but that's rather unlikely. The same code works in
> zsh without a problem.
>
> Now, if the line runs, it sometimes produces the output
>
> before
> foo
> after
> foobar
>
> or
>
> foobar
> before
> foo
> after
>
> just as one would expect. However, the code occasionally just deadlocks.
> I already found out that deadlocks only occur if I try to read from stdin
> in one of the two >()-blocks.
>
> How could I try to debug this?
> Has this something to do with how bash resumes its work after being
> suspended?
> The only reason I can think of is that somehow cat never exits. Do you
> think that's a reasonable guess?
> And, moreover, can that even happen?
>
> Like I said, the exact same code works in zsh out of the box without any
> issues.


It's not clear to me why one process should run before the other.
The calls to "mutex --lock" can run in parallel; the kernel chooses one.


Re: process substitution flawed by design

2017-02-21 Thread Pierre Gaston
On Tue, Feb 21, 2017 at 4:00 PM, Pierre Gaston 
wrote:

>
>
> On Tue, Feb 21, 2017 at 3:18 PM, Florian Mayer 
> wrote:
>
>> The following code assumes the lock to be in state not-taken before the
>> snippet runs.
>>
>> echo foo  | tee \
>> >(mutex --lock; echo before; cat; echo after; mutex --unlock) \
>> >(mutex --lock; echo foobar; mutex --unlock) \
>> > /dev/null | cat
>>
>> for mutex --lock I use a tool which I wrote myself. Since I created this
>> tool, there is a small chance that an error inside that program is the
>> cause for my problem, but that's rather unlikely. The same code works in
>> zsh without a problem.
>>
>> Now, if the line runs, it sometimes produces the output
>>
>> before
>> foo
>> after
>> foobar
>>
>> or
>>
>> foobar
>> before
>> foo
>> after
>>
>> just as one would expect. However, the code occasionally just deadlocks.
>> I already found out that deadlocks only occur if I try to read from stdin
>> in one of the two >()-blocks.
>>
>> How could I try to debug this?
>> Has this something to do with how bash resumes its work after being
>> suspended?
>> The only reason I can think of is that somehow cat never exits. Do you
>> think that's a reasonable guess?
>> And, moreover, can that even happen?
>>
>> Like I said, the exact same code works in zsh out of the box without any
>> issues.
>
>
> It's not clear to me why one process should run before the other.
> The calls to "mutex --lock" can run in parallel; the kernel chooses one.
>
Ah sorry, I read your example too fast; you are indeed expecting that.


Re: Does bash treat segment fault causing by scripts as security bugs ?

2017-02-15 Thread Pierre Gaston
I'm re-adding the list.

On Wed, Feb 15, 2017 at 4:34 PM, kkk K <3n4t...@gmail.com> wrote:

> What If I find a bug bypassing the FUNCNEST limitation ?
> I mean I found a bug which about some paser logic in bash,
> finially It will crash bash, And FUNCNEST cannot stop it from crashing
> bash.
>
>
I think you should feel free to submit your bug report: since the number
of reports is low, false reports are not a problem, and you may have a
genuine bug.






> 2017-02-15 21:01 GMT+08:00 Pierre Gaston :
>
>>
>>
>> On Wed, Feb 15, 2017 at 11:44 AM, kkk K <3n4t...@gmail.com> wrote:
>>
>>> for example,
>>> simple bash recur function call:
>>>
>>> ==
>>> #!/bin/bash
>>>
>>> function test()
>>> {
>>> test $1
>>> }
>>>
>>> test 1
>>> ==
>>>
>>> sincerely for your reply
>>>
>>>
>> bash has a special variable FUNCNEST to limit the recursion if you want
>> to prevent infinite recursion.
>>
>> However, this subject has been discussed multiple times, it's easy to
>> write arbitrary code that crashes bash (not to mention the whole machine).
>> This doesn't necessarily mean that there is a bug in bash, but in your
>> code.
>>
>> If you can run arbitrary code in a shell (or even if your script doesn't
>> validate its input), your security is already compromised.
>>
>>
>>
>


Re: Does bash treat segment fault causing by scripts as security bugs ?

2017-02-15 Thread Pierre Gaston
On Wed, Feb 15, 2017 at 11:44 AM, kkk K <3n4t...@gmail.com> wrote:

> for example,
> simple bash recur function call:
>
> ==
> #!/bin/bash
>
> function test()
> {
> test $1
> }
>
> test 1
> ==
>
> sincerely for your reply
>
>
bash has a special variable FUNCNEST to limit the recursion if you want to
prevent infinite recursion.

However, this subject has been discussed multiple times, it's easy to write
arbitrary code that crashes bash (not to mention the whole machine).
This doesn't necessarily mean that there is a bug in bash, but in your code.

If you can run arbitrary code in a shell (or even if your script doesn't
validate its input), your security is already compromised.


Re: echo -n

2017-02-02 Thread Pierre Gaston
On Thu, Feb 2, 2017 at 11:02 AM, Sangamesh Mallayya <
sangamesh.sw...@in.ibm.com> wrote:

> Hi,
>
> description:
> in bash echo -n , echo -e , echo -E has a special meaning. But we do not
> have a way in bash shell if we want to print
> -n , -e and -E using echo command. Other shells supports printing of
> -n/-e/-E options using echo command.
>
> For example
>
> with ksh
> # echo -n
> -n
> #
>
> with bash
> # echo -n
>
> #
>
> Please let us know if this a bug or do we have any other option to print
> -n ?
>
> Here is the environment details.
>
> version: bash 4.3
> Hardware and Operating System P7 AIX
> Compiled with AIX xlc
>
> Thanks,
> -Sangamesh
>
>
>
>
Not a bug; echo is not portable and POSIX recommends using printf, e.g.

printf '%s\n' -n
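A short sketch covering all three strings from the report, plus the detail that `--` is no escape hatch with bash's echo:

```shell
printf '%s\n' -n      # prints -n
printf '%s\n' -e -E   # prints -e and -E, one per line
echo -- -n            # prints "-- -n": bash's echo has no end-of-options marker
```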


Re: Terminal stop when a group of pipelines is piped to a pager

2016-09-28 Thread Pierre Gaston
On Wed, Sep 28, 2016 at 10:35 AM, Abhijit Dasgupta 
wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
> -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
> -DSHELL -DHAVE_CONFIG_H   -I.  -I../. -I.././include -I.././lib
> -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector --param=ssp-buffer-size=4
> -Wformat -Werror=format-security -Wall
> uname output: Linux rho 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8
> 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 4.3
> Patch Level: 11
> Release Status: release
>
> Description:
>
>   The following stopping issue is seen in recent bash versions:
>
> $ { cmd_a; cmd_b | cmd_c; } | pager_prog
>
> [1]+  Stopped  { cmd_a; cmd_b | cmd_c; } | pager_prog
>
>   What is seen:  If a group of commands (in the above form) has its output
>   piped to a pager program (e.g. more, less, etc), then they get stopped
>   (by a SIGTTIN/SIGTTOU signal, while pager_prog accesses the tty).  This
>   happens if the first command of the group (cmd_a) is not a bash builtin
>   AND a pipeline occurs later in the group (cmd_b | cmd_c).
>
>   Puzzlingly, the issue does not arise if in the command group either the
>   first command (cmd_a) is a bash builtin or if none of the later commands
>   contains a pipe.
>
> Repeat-by:
>
>   Specific examples to reproduce/illustrate the issue:
>
> { /bin/echo "Users"; who | sort; } | more# Gets stopped
> { builtin echo "Users"; who | sort; } | more  # Works fine
>
> { date; who | sort; } | more # Gets stopped
> { who | sort; date; } | more # Works fine
>
> Workaround:
>
>   The problem goes away if we enclose the individual pipelines
>   within the group (or the entire group) with a sh -c '...'.
>
> Additional notes:
>
> - The problem is still seen if the commands are grouped using parentheses
>   (subshell) instead of braces, and also when pipelined commands are
>   repeated in a loop, e.g.:
>
> for n in 1 2 ; do cmd_a | cmd_b ; done | pager_prog
>
> [1]+  Stopped  for n in 1 2;
> do
> cmd_a | cmd_b;
> done | pager_prog
>
> - The problem is seen in all terminal types (xterm, linux console VT, etc)
>
> - The problem is seen in bash versions 4.3.46 and 4.3.11, but not in 4.1.5.
>
>
> Thanks,
>
> Abhijit Dasgupta
>
I cannot reproduce with either 4.3.46 or 4.3.11, but my system is 32-bit
and not 64-bit.


Re: redirection inside a process-substitution

2016-08-23 Thread Pierre Gaston
On Mon, Aug 22, 2016 at 10:38 PM,  wrote:

> When doing redirection inside a sub-process to a descriptor that is
> redirected to a file the output of the subshell goes into the file.
>
> Now when the same descriptor is again redirected to another descriptor for
> this whole
> command-list, the output of the sub-process goes to the other descriptor.
>
> Except when the subshell is subject to process-substitution: In this
> case the outer redirection has no effect to the sub-process. Why is that?
>
> Example:
>
> rm -f tb.err
> exec 3>tb.err
> echo -- 1 --
> (echo 1 1>&3) 3>&1
> echo tb.err ...
> cat tb.err
> echo -- 2 -
> echo >(echo 2 1>&3) 3>&1
> echo tb.err ...
> cat tb.err
> echo -- 3 -
> echo >(echo 3 1>&3)
> echo tb.err ...
> cat tb.err
>
> Only test 3 should print 3 into tb.err. bash and ksh93 also print
> into tb.err in test 2, which is inconsistent compared to case 1. What's so
> special about process-substitution regarding redirection?
>
> -Helmut
>

Case 1 is totally different from the other two: 3>&1 redirects the fds
for the compound command ( ).

Case 2 is not really special: the redirection applies to the command (the
first echo), not to its arguments, in the same way that
echo $(echo 4 1>&3) 3>&1 prints into the file.

There is also another case that may be interesting:

echo 3>&1 > >(echo 5 1>&3)

in which 5 is not printed in the file, which makes sense to me as the
process substitution is part of the redirections and not of the arguments.


Re: smart indented HERE docs

2016-08-21 Thread Pierre Gaston
On Mon, Aug 22, 2016 at 6:47 AM, Derek Schrock  wrote:

>
> Would it be possible to add a new character (+) to the here-doc such
> that the number of tabs remove are the number after the closing
> delimiter (EOF in the above example):
>
> 
> cat <<+ EOF
> Testing 0 9 8
> Testing 7 6 5
> Testing 4 3 2
> EOF
>
> Note the above here-doc is tabbed out by two but the closing delimiter
> is only tabbed by 1 so only 1 tab is removed.
>
> Would result with:
>
> Testing 0 9 8
> Testing 7 6 5
> Testing 4 3 2
>
> This would allow you to write here-docs inside functions and continue to
> use tabs to maintain proper formatting and not have to use spaces.
>
>
I think it is a bit cumbersome and hard to see, as well as a bit
difficult to maintain.

Would it not be simpler to just use the number of tabs (or spaces, for
tab haters) before the first non-tab character of the first line of the
here doc?
I think most people would be happy with this kind of "do what I mean"
behavior... maybe except those using + as the first char of their
delimiter, of course.
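For reference, bash already ships `<<-`, which strips all leading tabs from the body and the delimiter line; it does not count indentation depth the way the proposals above do, but it covers the tab-indented-function case:

```shell
# <<- strips *all* leading tabs (not spaces) from each body line and
# from the line holding the delimiter.  The script is built with printf
# here only so the tabs (\t) are unambiguous on the page.
printf 'cat <<-EOF\n\t\tTesting 0 9 8\n\t\tTesting 7 6 5\n\tEOF\n' | bash
# prints:
# Testing 0 9 8
# Testing 7 6 5
```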


Re: echo builtin command will give the wrong value of the variable when there is a file named: 11 in the current directory

2016-07-27 Thread Pierre Gaston
On Wed, Jul 27, 2016 at 2:28 PM, Lingfei Kong <466701...@qq.com> wrote:

> Another reproducer:
>
> # c='[1][1][1]'
> # touch 111
> # echo $c
> 111
> # rm 111
> # echo $c
> [1][1][1]
>
> -- Original --
> *From: * "Lingfei Kong";<466701...@qq.com>;
> *Date: * Wed, Jul 27, 2016 07:24 PM
> *To: * "bug-bash";
> *Subject: * echo builtin command will give the wrong value of the
> variable when there is a file named: 11 in the current directory
>
> *Description:*
>
> echo builtin command will give the wrong value of the variable  when there is 
> a file named: 11 in the current directory.
>
>
> *Version:*
>
> GNU bash, version 4.2.45(1)-release-by_tst_tlinux20_v1004 
> (x86_64-unknown-linux-gnu)
>
> GNU bash, version 4.1.2(1)-release-by_mupan_tlinux_v1004 
> (x86_64-unknown-linux-gnu)
>
> GNU bash, version 3.2.48(1)-release-by_tst_suse_31_v1004 
> (x86_64-unknown-linux-gnu)
>
>
> *How reproducible:*
>
> 100%
>
>
> *Steps to Reproduce:*
>
> # touch 11
> # c='[11761][1469504252]'
>
> # echo $c
> 11
> # rm 11
> # echo $c
> [11761][1469504252]
>
> *Expected results:*
>
> # touch 11
> # c='[11761][1469504252]'
>
> # echo $c
> [11761][1469504252]
>
>
>
> Best Regards
>
> Lingfei
>
It's actually a feature: just like "echo *" will print the list of
filenames matching *, "echo [a-z]*" will print the list of filenames
starting with a letter between a and z.

[] defines a set of characters, like in almost every regular expression.

In your case you can disable matching against filenames by quoting the
expansion: echo "$c".
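A self-contained reproduction of both behaviors (the temporary directory is just scaffolding for the demo):

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch 111
c='[1][1][1]'
echo $c        # 111 -- unquoted: each [1] is a bracket pattern matching "1"
echo "$c"      # [1][1][1] -- quoted: pathname expansion is disabled
cd / && rm -r "$tmp"
```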


Re: Array variables still seen by test -v as unset even after assignment

2016-07-26 Thread Pierre Gaston
On Tue, Jul 26, 2016 at 8:29 PM,  wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: x86_64-pc-linux-gnu-gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
> -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL
> -DHAVE_CONFIG_H   -I. -I./include -I. -I./include -I./lib
> -DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
> -DSTANDARD_UTILS_PATH='/bin:/usr/bin:/sbin:/usr/sbin'
> -DSYS_BASHRC='/etc/bash/bashrc' -DSYS_BASH_LOGOUT='/etc/bash/bash_logout'
> -DNON_INTERACTIVE_LOGIN_SHELLS -DSSH_SOURCE_BASHRC -march=corei7-avx -O2
> -pipe -fomit-frame-pointer
> uname output: Linux home 4.0.5-gentoo-i5 #11 SMP PREEMPT Thu Dec 31
> 00:13:56 MSK 2015 x86_64 Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz
> GenuineIntel GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 4.3
> Patch Level: 39
> Release Status: release
>
> Description:
> Unlike usual variables, that have been assigned an empty value,
> arrays are treated by test -v as if they weren’t assigned anything.
>
> Repeat-By:
>
> $ unset nothing; declare -p nothing; [ -v nothing ]; echo $?
> bash: declare: nothing: not found
> 1
>
> $ unset var; var= ; declare -p var; [ -v var ]; echo $?
> declare -- var=""
> 0
>
> $ unset arr; arr=(); declare -p arr; [ -v arr ]; echo $?
> declare -a arr='()'
> 1
>
> This has been reported and discussed several times.
Bash considers an array as unset until at least one of its elements has
been assigned a value.
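A quick illustration (bash-specific, since [ -v ] is a bash test):

```shell
arr=()
[ -v arr ]; echo $?   # 1: an empty array still counts as unset for -v
arr[0]=x
[ -v arr ]; echo $?   # 0: set once an element has been assigned a value
```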


Re: ordering of printed lines changes when redirecting

2016-07-18 Thread Pierre Gaston
On Mon, Jul 18, 2016 at 11:22 AM, walter harms  wrote:

>
> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc -I/home/abuild/rpmbuild/BUILD/bash-4.2
> -L/home/abuild/rpmbuild/BUILD/bash-4.2/../readline-6.2
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-suse-linux-gnu'
> -DCONF_VENDOR='suse' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
> -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib
> -fmessage-length=0 -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2
> -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g
> -D_GNU_SOURCE -DRECYCLES_PIDS -Wall -g -Wuninitialized -Wextra
> -Wno-unprototyped-calls -Wno-switch-enum -Wno-unused-variable
> -Wno-unused-parameter -ftree-loop-linear -pipe -DBNC382214=0 -fprofile-use
> uname output: Linux omnfr121 4.5.0-4.g3d86af7-default #1 SMP PREEMPT Fri
> Mar 18 13:03:45 UTC 2016 (3d86af7) x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-suse-linux-gnu
>
> Bash Version: 4.2
> Patch Level: 47
> Release Status: release
>
> Description:   ordering of printed lines changes when merging stdin/stdout
> and redirecting
>
>
The problem is that buffering of stdout changes depending on where it goes
(line buffered in a terminal, fully buffered in a file), see for instance:

http://www.pixelbeat.org/programming/stdio_buffering/

This behavior should be documented by your C library.

You could flush stdout in your program, for other workarounds see
http://mywiki.wooledge.org/BashFAQ/009


Re: Officially document that we allow other characters in function names

2016-06-27 Thread Pierre Gaston
On Mon, Jun 27, 2016 at 8:15 PM, Pierre Gaston 
wrote:

>
>
> On Mon, Jun 27, 2016 at 7:17 PM, konsolebox  wrote:
>
>> On Mon, Jun 27, 2016 at 10:41 PM, Chet Ramey  wrote:
>> > On 6/27/16 3:11 AM, konsolebox wrote:
>> >> Hi, I think it's time that we officially specify in the manual of Bash
>> >> that we allow other characters besides [[:alnum:]_] when declaring
>> >> function names in non-POSIX mode.
>> >
>> > Is there some new reason to do this now?
>> >
>>
>> Not really, but sometimes I encounter people saying such practice of
>> using characters besides those allowed by POSIX is wrong simply
>> because it is undocumented.  I just thought about making a suggestion
>> today, and hope that it gets updated before 4.4.
>>
>> --
>> konsolebox
>>
> Chet is one of these people ;)
> https://lists.gnu.org/archive/html/bug-bash/2011-04/msg00040.html
>

Maybe it's possible to explicitly allow some characters, if not all. For
instance, one of the rare bash style guides out there:

https://google.github.io/styleguide/shell.xml#Function_Names

suggests using :: for separating library names.


Re: Officially document that we allow other characters in function names

2016-06-27 Thread Pierre Gaston
On Mon, Jun 27, 2016 at 7:17 PM, konsolebox  wrote:

> On Mon, Jun 27, 2016 at 10:41 PM, Chet Ramey  wrote:
> > On 6/27/16 3:11 AM, konsolebox wrote:
> >> Hi, I think it's time that we officially specify in the manual of Bash
> >> that we allow other characters besides [[:alnum:]_] when declaring
> >> function names in non-POSIX mode.
> >
> > Is there some new reason to do this now?
> >
>
> Not really, but sometimes I encounter people saying such practice of
> using characters besides those allowed by POSIX is wrong simply
> because it is undocumented.  I just thought about making a suggestion
> today, and hope that it gets updated before 4.4.
>
> --
> konsolebox
>
Chet is one of these people ;)
https://lists.gnu.org/archive/html/bug-bash/2011-04/msg00040.html


Re: Leak in BASH "named" file descriptors?

2016-04-13 Thread Pierre Gaston
On Wed, Apr 13, 2016 at 3:51 PM, Chet Ramey  wrote:

> On 4/13/16 1:54 AM, George Caswell wrote:
>
> > Personally, I don't think it makes sense for a redirection on a command
> to
> > persist beyond the scope of that command. A redirection with a
> > dynamically-assigned fd is basically equivalent to a redirection to a
> > numbered fd.
>
> Then why have it?  There's not enough value in making a construct that
> is slightly different but functionally equivalent available unless it
> provides something new.
>
>
For me the value is in 1) not hard-coding the number and 2) being able to
use more explicit names (e.g. "logfile" rather than "3"), nothing more.

Of course, if you use {var} for the redirections of an external command
it's useless, but not using a hard-coded number can be useful if you use
functions and don't want to have conflicts with someone else's functions.

I don't really understand why using a symbolic name would need to provide
more control, and in my opinion {var}> doesn't really provide anything you
can't do otherwise regarding the handling of the fd; it just has a
different behavior.
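A small sketch of that usage (the target file name is illustrative):

```shell
# Let bash pick a free descriptor and remember its number in $logfd,
# instead of hard-coding a number like 3 that another function may use.
logfile=$(mktemp)            # illustrative target file
exec {logfd}>"$logfile"      # bash allocates an fd and stores it in logfd
echo "starting" >&"$logfd"
exec {logfd}>&-              # close the named descriptor when done
```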


Re: bash "while do echo" can't function correctly

2016-04-13 Thread Pierre Gaston
On Wed, Apr 13, 2016 at 2:34 PM, John McKown 
wrote:

> On Wed, Apr 13, 2016 at 1:10 AM, Geir Hauge  wrote:
>
> ​...
>
>
>> though printf should be preferred over echo:
>>
>> while read -r line; do printf '%s\n' "$line"; done < test.txt
>>
>
> ​I've never read about using printf in preference to echo. Why is that? ​I
> have used it myself in special cases, such as wanting leading zeros
> (i=0;printf '%03d\n' "${i}";)
>


Posix says:

It is not possible to use *echo* portably across all POSIX systems unless
both *-n* (as the first argument) and escape sequences are omitted. The
*printf* utility can be used portably to emulate any of the traditional
behaviors of the *echo* utility as follows (assuming that *IFS* has its
standard value or is unset)
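A concrete case where echo bites and printf doesn't (bash's default echo shown; the variable value is illustrative):

```shell
var='-n'
echo "$var"              # prints nothing: -n is taken as an option to echo
printf '%s\n' "$var"     # prints -n literally, as intended
```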


Re: mv to a non-existent path now renames instead of failing

2016-03-19 Thread Pierre Gaston
On Thu, Mar 17, 2016 at 1:37 PM,  wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-unknown-linux-gnu'
> -DCONF_VENDOR='unknown' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
> -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  -D_FORTIFY_SOURCE=2
> -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong
> -DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/bin'
> -DSTANDARD_UTILS_PATH='/usr/bin' -DSYS_BASHRC='/etc/bash.bashrc'
> -DSYS_BASH_LOGOUT='/etc/bash.bash_logout'
> uname output: Linux korath.teln.shikadi.net 4.5.0-1-drm-intel-nightly #1
> SMP PREEMPT Sun Mar 13 10:42:04 AEST 2016 x86_64 GNU/Linux
> Machine Type: x86_64-unknown-linux-gnu
>
> Bash Version: 4.3
> Patch Level: 42
> Release Status: release
>
> Description:
> Moving a directory to a non-existent path will rename the
> directory instead
> of reporting that the destination directory does not exist.
>
> Repeat-By:
> rmdir two 2> /dev/null
> mkdir one
> mv one two/
>
> This should (and did in earlier versions) return an error, since
> the "two"
> directory does not exist, so the "one" folder cannot be moved
> inside of it.
> If the trailing slash was left off "two/", the command should (and
> does,
> and always did) rename the folder.  However recently the command
> with the
> trailing slash has started renaming the folder instead of
> returning an
> error.
>
> I often rely on the error result so that I don't have to check
> whether the
> destination directory exists before performing the move operation,
> but now
> the process will always succeed, silently becoming a rename
> operation
> unpredictably.  I have already lost a handful of folders this way,
> only
> realising later that they were renamed without warning when I did
> not
> intend them to be renamed.
>
>
>
Thanks for the report; however, bash is not involved, it is just passing
the arguments to the "mv" command.
I can reproduce the behaviour with GNU mv, so maybe you are also using that
version, in which case you should report it to
https://lists.gnu.org/mailman/listinfo/bug-coreutils

(I can reproduce with mv version 5.97 from 2006 so it's probably not that
new)


Re: `${PARAMETER:OFFSET}' does not work for negative offset

2016-02-12 Thread Pierre Gaston
On Fri, Feb 12, 2016 at 10:22 AM, Ulrich Windl <
ulrich.wi...@rz.uni-regensburg.de> wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc -I/home/abuild/rpmbuild/BUILD/bash-4.2
> -L/home/abuild/rpmbuild/BUILD/bash-4.2/../readline-6.2
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-suse-linux-gnu'
> -DCONF_VENDOR='suse' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
> -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib   -fmessage-length=0
> -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector
> -funwind-tables -fasynchronous-unwind-tables -g  -D_GNU_SOURCE
> -DRECYCLES_PIDS -Wall -g -Wuninitialized -Wextra -Wno-unprototyped-calls
> -Wno-switch-enum -Wno-unused-variable -Wno-unused-parameter
> -ftree-loop-linear -pipe -DBNC382214=0 -fprofile-use
> uname output: Linux pc 4.1.15-8-default #1 SMP PREEMPT Wed Jan 20 16:41:00
> UTC 2016 (0e3b3ab) x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-suse-linux-gnu
>
> Bash Version: 4.2
> Patch Level: 47
> Release Status: release
>
> Description:
> `${PARAMETER:OFFSET}' does not work for negative offset; the complete
> parameter value is substituted
>
> The bug goes back to at least bash 3.2...
>
> Repeat-By:
> "X=ABC; echo ${X:-2}" outputs "ABC", and not "BC"
>

There is an ambiguity between ${param:-default} and ${param:n:m} when n is
negative.
${X:(-2)} or ${X: -2} (with a space) are possible workarounds.
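To spell the workarounds out:

```shell
X=ABC
echo "${X:-2}"    # ABC : parsed as ${X:-word}, "use default if unset/empty"
echo "${X:(-2)}"  # BC  : parentheses make the negative offset unambiguous
echo "${X: -2}"   # BC  : a space before the minus sign also works
```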


Re: Bug on function.

2015-12-08 Thread Pierre Gaston
On Tue, Dec 8, 2015 at 10:29 AM, Kelvin Tan Thiam Teck 
wrote:

> dumbass@Lucifer:~$ ./report.sh 'echo' 1 2 3 4 5 6 7 8 9 10
> param 1  : echo
> param 2  : 1
> param 3  : 2
> param 4  : 3
> param 5  : 4
> param 6  : 5
> param 7  : 6
> param 8  : 7
> param 9  : 8
> param 10  : echo0
> param 11  : echo1
> param 12  : echo2
> param 13  : echo3
> param 14  : echo4
> param 15  : echo5
> param 16  : echo6
> param 17  : echo7
> param 18  : echo8
>
It is always hard to understand what you mean from a simple output... OK,
your other problem is that you need echo "${11}" to display the 11th
argument; echo $11 is really the same as echo "${1}"1, it appends a 1 after
"$1".

set --  one two three four five six seven  eight nine ten eleven;echo
"$11";echo "${11}"
one1
eleven


Re: Bug on function.

2015-12-08 Thread Pierre Gaston
On Tue, Dec 8, 2015 at 9:58 AM, Kelvin Tan Thiam Teck 
wrote:

> dumbass@Lucifer:~$ ./report.sh "echo ln -s /sbin/halt; mv halt ;reboot8 ;*
> reboot*" AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA
> AAA AAA AAA AAA
> Before Passing Thru Function: echo ln -s /sbin/halt; mv halt ;reboot8 ;
> reboot AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA
> AAA AAA AAA
> reboot: Need to be root
> 9th:
> 10th: echo0
> 11th: echo1
> 12th: echo2
> 13th: echo3
> 14th: echo4
> 15th: echo5
> 16th: echo6
> 17th: echo7
> ./report.sh: line 29: echo8: command not found
> 19th: echo9
> 20th: ln0
> dumbass@Lucifer:~$
>

I think you misunderstand me: I'm not denying that you can inject some
code. What I'm saying is that the bug is in your code.
Here is a simpler way to reproduce:

$ cat inject
#!/bin/bash
function foo {
  "$2"
}

foo $*
$ ./inject "blah date"
Tue Dec  8 10:08:45 EET 2015

You can see that "date" is executed, but it's a bug in the script, $* is
split in 2 as it is supposed to and foo receives 2 arguments.

you can fix the bug using "$@"
$ vi inject
$ cat inject
#!/bin/bash
function foo {
  "$2"
}

foo "$@"
$ ./inject "blah date"
./inject: line 3: : command not found


Now the arguments are not split again and foo receives only one argument,
hence the error.

As I said, there are many pitfalls in shell scripting; that's why allowing
a script to run with more privileges than the user has is dangerous.


Re: Bug on function.

2015-12-07 Thread Pierre Gaston
On Tue, Dec 8, 2015 at 9:16 AM, Kelvin Tan Thiam Teck 
wrote:

> Hi,
> Please try my payload on that script, before telling me what $@ and $*
> does. and see if my param1 injection will cause your system to reboot on
> 18th param. it has nothing to do with $@ & $*, it's another bugs on bash
> which i found out, similar to shockbash, except it's harder to execute due
> to the requirement for it to happen.
>
>
> Regards
> KT
>
>
But it's code injection because your script is badly written, it's not a
bug in bash.
It's badly written because without quotes around "$@" the parameters are
split into words and then you tell bash to execute one of these words.
Bash does what it is supposed to do in your example.

And yes, there are many, many ways to write a script that allows code
injection.

Shellshock was entirely different in that it allowed injecting code no
matter how the script was written.


Re: Lower case construction does not working properly

2015-11-24 Thread Pierre Gaston
On Tue, Nov 24, 2015 at 2:23 PM, Michael Kazakov 
wrote:

> Hellol.
> I have founded a bug in variable manipulation behavior of bash version
> 4.2.53.
> Constructions ${parameter,pattern} and ${parameter,,pattern} does not
> working properly:
> michael@kazakov:~> VAR=COLORADO
> michael@kazakov:~> echo ${VAR,c}
> COLORADO
>

c is a pattern that only matches a lowercase c, so it will not match an
uppercase C.

$ VAR=COLORADO; echo ${VAR,C}
cOLORADO


As an aside, shopt -s nocasematch and shopt -s nocaseglob don't seem to
apply; with both on:

$ VAR=COLORADO; echo ${VAR,[c]}
COLORADO


Re: Design question(s), re: why use of tmp-files or named-pipes(/dev/fd/N) instead of plain pipes?

2015-10-22 Thread Pierre Gaston
On Thu, Oct 22, 2015 at 5:57 AM, Linda Walsh  wrote:

>
> But only as a pointer to something one can do I/O on.
> You can't set any file attributes or metadata on "pipe:[]" It's not a
> real file somewhere.
>

Yes, it's not a regular file, but it is nonetheless true that <( ) gives
you a string that can be used by applications that were written to deal
with regular files whose names are passed as arguments, or in cases where
you need more than one stream.

eg you can compare the output of two commands like:

cmp <(sort fileA) <(sort fileB)

cmp will receive 2 strings as arguments, use these strings as filenames and
will open the special files just like if they were regular files.

Yes, they are not regular, yes, they cannot be seeked, and the trick will
not work in all cases, but for cmp it will work just as well as with
regular files.
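You can see the file-name string bash passes (the exact path is system dependent: /dev/fd/N on most systems, a named pipe on others):

```shell
echo <(:)        # prints something like /dev/fd/63, depending on the system
# cmp opens the two "files" exactly as if they were regular files:
cmp <(printf 'a\n') <(printf 'a\n') && echo same
```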


Re: GNU Guile integration

2015-07-14 Thread Pierre Gaston
On Tue, Jul 14, 2015 at 11:13 AM, Charles Daffern 
wrote:

> On 14/07/15 06:49, Dmitry Bogatov wrote:
> > Guile is for situations, when script is mainly calls other programs,
> > but still needs moderately complex logic of text manipulation,
> > compraison and mapping. Recently I wrote script, that had to emulate
> > map(data structure). Well, I would prefer that is was part of Bash.
> Bash has associative arrays, which is the data structure other languages
> refer to as a map.
> > Second (possible) reason is that it allows Bash to be extended by every
> > user in Emacs way. After all, Guile was created for this to be possible.
> Bash has coproc, which allows 2-way communication with other processes
> including scripting language interpreters. (Whether that's a good idea
> or not is a different story, but it's possible.)
>

Bash is a language, so sure, it can do some of these things... just not as
many as Guile.

I think adoption would be difficult considering even a useful loadable
builtin like "finfo" has not found its way into default installations, but
for instance I can imagine bash programmable completion could go another
level with an embedded interpreter that lets you access the readline
internals.

Other examples could be things like GNU parallel, sqlite3, or parsing XML
or JSON... Of course there are external tools for this, and you can compile
a loadable builtin, but being able to tinker with a small script and get
your query result directly into a bash array without further parsing is
nice too.


Re: Weird background crashing bug

2015-06-28 Thread Pierre Gaston
On Mon, Jun 29, 2015 at 5:54 AM, Braden Best  wrote:

> Re-send:
>
>
> I noticed it when I tried to branch an xterm off into multiple sessions
> and mistyped its name:
>
> `xter m&`
>
> So after experimenting with a ton of different scenarios I've come to this
> conclusion:
>
> * both xterm and gnome-terminal crash
>
> * a nested bash session also crashes returning me back to the previous
> shell where the wd is ~
>
> * does *not* crash in TTY, nor in nested session *within* TTY.
>
> * only happens when two or more (but not less) directories deep into home
> (~), for example, "~/Videos/movies/" or "~/Pictures/vacation/2009".
>
> *Running a non-existent command in the background while two or more
> directories deep into home (~) causes bash to crash, but only when in a
> terminal emulator*
>
> Why does this happen?
>
> Addendum:
>
> *The version number of Bash.*
> $ bash --version
> 4.3.11(1)-release (x86_64-pc-linux-gnu)
>
> *The hardware and operating system.*
> Aspire-XC-603G
> Ubuntu 14.04.1 LTS
>
> *The compiler used to compile Bash.*
> can't find that information. `info bash | grep gcc` gives me nothing
>
> *A description of the bug behaviour.*
> Described Above
>
>
> *A short script or ‘recipe’ which exercises the bug and may be used to
> reproduce it. *$ mkdir dir1
> $ mkdir dir1/dir2
> $ cd dir1/dir2
> $ nonexistentcommand &
>
> Using it as a script won't cause a crash. The crash only happens in
> interactive mode.
>
>
> On Sun, Jun 28, 2015 at 8:40 PM, Braden Best 
> wrote:
>
>> I noticed it when I tried to branch an xterm off into multiple sessions
>> and mistyped its name:
>>
>> `xter m&`
>>
>> So after experimenting with a ton of different scenarios I've come to
>> this conclusion:
>>
>> * both xterm and gnome-terminal crash
>>
>> * a nested bash session also crashes returning me back to the previous
>> shell where the wd is ~
>>
>> * does *not* crash in TTY, nor in nested session *within* TTY.
>>
>> * only happens when two or more (but not less) directories deep into home
>> (~), for example, "~/Videos/movies/" or "~/Pictures/vacation/2009".
>>
>> *Running a non-existent command in the background while two or more
>> directories deep into home (~) causes bash to crash, but only when in a
>> terminal emulator*
>>
>> Why does this happen?
>>
>> --
>> Braden Best
>> bradentb...@gmail.com
>> (505) 692 0947
>>
>
>
>
> --
> Braden Best
> bradentb...@gmail.com
> (505) 692 0947
>

I can't seem to reproduce this with 4.3.30(1)-release; just in case, can
you try it after running:

PS1='$ ' PROMPT_COMMAND=''; unset -f  command_not_found_handle
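(For context, command_not_found_handle is a function that some distributions, Ubuntu included, define to suggest packages when a command is missing; a misbehaving handler could explain odd crashes. A minimal sketch of such a handler, purely illustrative:)

```shell
# If this function is defined, bash runs it instead of printing the
# default "command not found" error; its return status becomes the
# failed command's exit status.
command_not_found_handle() {
  printf '%s: command not found\n' "$1" >&2
  return 127
}
nosuchcmd 2>/dev/null; echo $?   # 127, from the handler's return status
```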


Re: Bug Reporting

2015-06-09 Thread Pierre Gaston
On Tue, Jun 9, 2015 at 11:16 AM, Avinash Thapa 
wrote:

> Hi, I'll report the bug via email only.
>
> In this, you are able to get the /etc/passwd file inside an error, this
> thing looks weird to me so I thought to report you this thing.
>
> Just write in your terminal
> bash -i '/etc/passwd' and hit enter and you'll see the passwd file is
> shown as error,
>
> Similarly if you have root privilages so write
> bash -i '/etc/shadow'
>
> you'll get shadow file as error. Please find the attached screenshot for
> the same. :)
>
> Thanks,
> Acid.
>

Well, it tries to execute the file and shows the lines where it gets an
error; that's useful behavior when you are trying to write a script.

If you have read access to these files and can run bash, you can print them
anyway, e.g.:

bash -c 'echo  "$(

Re: (read -r var) vs <(read -r var) behavior

2015-05-22 Thread Pierre Gaston
On Wed, May 20, 2015 at 4:12 AM, Chet Ramey  wrote:

> On 5/19/15 1:42 AM, Pierre Gaston wrote:
>
> > The question really is (I discussed this with him on IRC) why can you do:
> >
> > $ cat <(read var  > blah
> > blah
> >
> > but not:
> >
> > $ cat < <(read var  > bash: read: read error: 0: Input/output error
>
> I'm not sure where you can do this; I get EIO for both constructs on Mac
> OS X, Fedora 21, RHEL 6, and Solaris 11 (what I happened to have available
> today).
>
> The kernel returns -1/EIO because the shell started to run the process
> substitution is ignoring SIGTTIN and the process attempts to read
> from its controlling terminal (/dev/tty) while the terminal's process
> group is set to different process group (making that shell a background
> process group).
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRUc...@case.edu
> http://cnswww.cns.cwru.edu/~chet/
>

I get that on CentOS 5.5 (Linux 2.6.18), but I am indeed unable to
reproduce it on other systems.


Re: (read -r var) vs <(read -r var) behavior

2015-05-18 Thread Pierre Gaston
On Mon, May 18, 2015 at 10:26 PM, Chet Ramey  wrote:

> On 5/16/15 1:11 PM, marz...@gmail.com wrote:
>
> > Bash Version: 4.3
> > Patch Level: 30
> > Release Status: release
> >
> > Description:
> > from interactive shell running cat < <(read -r var) prints:
> >   bash: read: read error: 0: Input/output error
> >
> > on the other hand:
> > (read -r var)   reads chars from terminal stdin
> >
> >
> > Repeat-By:
> >  cat < <(read var)
>
> I'm not sure what the question is here.  The two constructs are totally
> different in effect and implementation.  The error comes because the
> process substitution is run asynchronously, in the same process group as
> the calling shell (though exactly which pgrp doesn't matter), and the
> `cat' process runs in a different process group and `owns' the terminal.
>
> In the second (subshell) example, none of these things is true.
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRUc...@case.edu
> http://cnswww.cns.cwru.edu/~chet/
>
The question really is (I discussed this with him on IRC) why can you do:

$ cat <(read var 

Re: The restricted shell can be easily circumvented.

2015-04-04 Thread Pierre Gaston
On Sat, Apr 4, 2015 at 8:22 AM, David Bonner  wrote:

> Bash Bug Report
>
> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
> -DCONF_VENDOR='p$
> uname output: Linux LFS-BUILD 3.16.0-23-generic #31-Ubuntu SMP Tue Oct 21
> 17:56:17 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 4.3
> Patch Level: 30
> Release Status: release
>
> Description:
> The restricted shell opened by calling rbash or bash with the -r
> or --restricted option can be easily circumvented with the
> command 'chroot / bash' making the restricted shell useless
> because anyone can get out of it with this command.
>
> Repeat-By:
> 1:Open a restricted shell
> 2:Test with 'cd ..'
> 3:Use 'chroot / bash'
> 4:Test that you are no longer restricted with 'chroot / bash'
>
>
This has already been discussed on the mailing list; you should be able to
find previous discussions about this and about the fact that bash -r is not
an all-inclusive solution (e.g.
https://lists.gnu.org/archive/html/bug-bash/2012-01/msg00048.html ).

However, your example is not a very convincing one: you cannot use "cd" in
a restricted shell, so it's not clear what you are really running, and it
is obvious that many commands will let you escape the restrictions if they
are made available.
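For the record, cd is one of the things a restricted shell forbids (a quick check; assumes bash is in PATH):

```shell
bash -r -c 'cd /tmp'   # fails with "cd: restricted"
echo $?                # non-zero exit status
```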


Re: bang-hash behavior change?

2015-02-26 Thread Pierre Gaston
On Thu, Feb 26, 2015 at 2:40 AM, Milo H. Fields 
wrote:

> Greetings,
>
> I noticed that the 'bang-hash'  behavior seems to have changed somewhere
> between bash version 4.1.17 and 4.3.33.
>
>
>
> e.g. for the script 'jnk' containing:
>
> echo "plain: !#"
>
> echo " parens: ${!#}"
>
>
>
> bash 4.3.33 (and others)
>
> $ sh jnk arg1 arg2
>
> plain: !#
>
> parents:
>
>
>
> bash 4.1.17:
>
> $ sh jnk arg1 arg2
>
> plain: !#
>
> parents: arg2
>
>
>
> Is this a bug?
>
>
>
>
>
What if you use "bash" instead of "sh"?
Maybe your sh is not really bash; in any case you should use bash if you
rely on bash features.
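A quick way to check what sh really is, plus a demonstration that ${!#} is a bash-only expansion (argument values mirror the report and are illustrative):

```shell
ls -l "$(command -v sh)"   # on many systems sh is a link to dash or ash
# Under bash, ${!#} expands indirectly through $# to the last argument;
# dash and other POSIX shells report "Bad substitution" instead.
bash -c 'echo "${!#}"' jnk arg1 arg2   # prints arg2
```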


Re: how to search for commands

2015-02-24 Thread Pierre Gaston
On Tue, Feb 24, 2015 at 1:51 PM,  wrote:

> hmm. but can I use a wildcard with any of them. For example search for all
> commands which contain the word "nice". Which would bring up ionice.
>

compgen -c | grep nice


Re: how to search for commands

2015-02-24 Thread Pierre Gaston
On Tue, Feb 24, 2015 at 7:11 AM, Dan Douglas  wrote:

> On Mon, Feb 23, 2015 at 10:50 PM,   wrote:
> > How do you search for commands? In powershell you have the get-command
> cmdlet. Is there anything equivalent in unix?
>
> Depends on the type of command. For shell builtins, bash has `help':
>
> $ help '*ad'
> Shell commands matching keyword `*ad'
>
> read: read [-ers] [-a array] [-d delim] [-i text] [-n nchars] [-N
> nchars] [-p prompt] [-t timeout] [-u fd] [name ...]
> Read a line from the standard input and split it into fields.
> ...
>
> To search for commands found in PATH (or functions or aliases) use
> `type'. See `help type' for how to use it.
>
> Searching for commands by package is OS-specific. e.g. in Gentoo
> `equery f -f cmd pkg' will show "commands" belonging to a package.
> Cygwin's equivalent is `cygcheck -l'. Pretty much every distro has
> something similar.
>
> --
> Dan Douglas
>
>
There's also "compgen -c"  that will list all things that bash things as a
"command" (which includes things like if), it lists the possible completion
so you can also list everything starting with a "f" with "compgen -c f"

(also there's help-b...@gnu.org for these kind of questions)


Re: how to search for commands

2015-02-24 Thread Pierre Gaston
Thanks for your useful input.

On Tue, Feb 24, 2015 at 12:54 PM, Hans J Albertsson <
hans.j.alberts...@gmail.com> wrote:

> Help in bash seems to do most of what's actually needed.
>
> Hans J. Albertsson
> From my Nexus 5
> Den 24 feb 2015 11:48 skrev "Hans J Albertsson" <
> hans.j.alberts...@gmail.com>:
>
> Powershell is a very good cmd language, so bash and other unix shells
>> might do well to adopt some ideas from there.
>>
>> Normally, cmd search is only done thru completion in Unix shells, which
>> was an idea from tops 20 exec on Digital Equipment mainframes and early
>> lisp machines.
>> Get-command does more than lexical completion, I think.
>>
>> Hans J. Albertsson
>> From my Nexus 5
>> Den 24 feb 2015 06:11 skrev "Dan Douglas" :
>>
>>> On Mon, Feb 23, 2015 at 10:50 PM,   wrote:
>>> > How do you search for commands? In powershell you have the get-command
>>> cmdlet. Is there anything equivalent in unix?
>>>
>>> Depends on the type of command. For shell builtins, bash has `help':
>>>
>>> $ help '*ad'
>>> Shell commands matching keyword `*ad'
>>>
>>> read: read [-ers] [-a array] [-d delim] [-i text] [-n nchars] [-N
>>> nchars] [-p prompt] [-t timeout] [-u fd] [name ...]
>>> Read a line from the standard input and split it into fields.
>>> ...
>>>
>>> To search for commands found in PATH (or functions or aliases) use
>>> `type'. See `help type' for how to use it.
>>>
>>> Searching for commands by package is OS-specific. e.g. in Gentoo
>>> `equery f -f cmd pkg' will show "commands" belonging to a package.
>>> Cygwin's equivalent is `cygcheck -l'. Pretty much every distro has
>>> something similar.
>>>
>>> --
>>> Dan Douglas
>>>
>>>


Re: Nested calls to getopts incorrectly parses clustered options

2015-01-14 Thread Pierre Gaston
On Wed, Jan 14, 2015 at 12:35 PM, Øyvind 'bolt' Hvidsten 
wrote:

> Nobody else having issues with this?
> It's still a case in bash 4.3.30
>
>
> On 31/05/14 18:40, Øyvind Hvidsten wrote:
>
>> For a simple test:
>>
>> $ f() { local OPTIND=1 OPTARG OPTERR opt; while getopts ":abcxyz" opt;
>> do echo "opt: $opt"; if [[ "$opt" = "y" ]]; then f -a -b -c; fi; done;
>> }; f -x -y -z
>> opt: x
>> opt: y
>> opt: a
>> opt: b
>> opt: c
>> opt: z
>>
>> However, if the options are clustered:
>> $ f() { local OPTIND=1 OPTARG OPTERR opt; while getopts ":abcxyz" opt;
>> do echo "opt: $opt"; if [[ "$opt" = "y" ]]; then f -abc; fi; done; }; f
>> -xyz
>> opt: x
>> opt: y
>> opt: a
>> opt: b
>> opt: c
>> opt: x
>> opt: y
>> opt: a
>> opt: b
>> opt: c
>> opt: x
>> opt: y
>> opt: a
>> opt: b
>> opt: c
>> etc
>>
>> It's important to note that this happens even if f() doesn't call
>> itself, but rather calls some other function that also uses getopts. The
>> clustering of the inner set of options (-abc) is also not important -
>> the internal index of $1 is reset to the beginning either way.
>>
>> Whatever variable tracks the index within a single clustered set of
>> options should probably also be exposed as a shell variable so it can be
>> set as local to the function. Or it should be so implicitly.
>>
>>
>> Øyvind
>>
>>
>>
>
>
It has been reported before; I guess Chet didn't manage to work around it
yet:
http://lists.gnu.org/archive/html/bug-bash/2012-01/msg00044.html


Re: Random loss of bash history

2014-10-10 Thread Pierre Gaston
On Fri, Oct 10, 2014 at 11:40 AM, Linda Walsh  wrote:

> You DID read the release notes and changes from 4.2->4.3.
>
> Someone had the bright idea that .. in 4.2, '0' meant no limit in
> history (in bash and readline)... but in 4.3, '0' means 0 and throw
> away history while negative values mean keep it all.
>
> Perhaps you were hit by this brilliant new feature -- no doubt
> a new POSIX blessing.
>

Your ironic stance won't help your case, especially when what you describe
is not true: 0 in 4.2 means 0.

$ HISTSIZE=10
$ echo $BASH_VERSION
4.2.53(1)-release
$ history
  999  PS1=\$\
 1000  HIST_SIZE=10
 1001  echo $BASH_VERSION
 1002  history
$ HISTSIZE=0
$ history
$


Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Pierre Gaston
On Tue, Oct 7, 2014 at 8:45 PM, Linda Walsh  wrote:

>
>
> Greg Wooledge wrote:
>
>> OK, then use a function to give you an escapable block:
>>
>> declare -A ...
>> create_maps() {
>> cd "$sysnet" || return
>> for ifname in ...; do
>> hwaddr=$(<"$ifname"/address)
>> act_hw2if[$hwaddr]="$ifname"
>> act_hw2if[$ifname]="$hwaddr"
>> done
>> }
>> create_maps
>>
>>  Either way, they code as you have suggested won't work
>>> without overcoming another set of side effects.
>>>
>>
>> There are ways to NOT use subshells.  I have given you two of those
>> ways now.
>>
> 
>
> I appreciate the options, but the option I want is the parent
> staying
> "put", and only sending children out to change dir's.  Say some new
> version of init wants to isolate init processes by putting them in their
> own dir and then deleting the dir.  As long as they don't cd out of the
> dir, they are fine, but if they do, they can't get back to it.
>
> Why can't I spawn a child that cd's into a dir and reports back
> what
> it finds in the dir?  I do this in perl or C w/no problem.
>
> I have an iomonitor program, that monitors multiple data inputs.  For
> each, I spawn a child that sits in that directory, holding the FD open
> and rewinding it for updated status.  There's no constant
> opening/reopening of files -- no walking paths.  Each time you open
> a path, the kernel has to walk the path -- if cached, happens very
> quickly -- but still work as it also has to check access on each
> traversal (it can change).  If your data-gathering routines sit where
> the data is and don't move, there is no waisted I/O or CPU accessing the
> data and the parent gets updates via pipes.
>
> There is no fundamental reason why, say, process substitution needs to
> use /dev/fd or /proc/anything -- and couldn't operate exactly like piped
> processes do now.  On my first implementation of multiple IPC programs,
> I've used semaphores, message queues, named pipes, and shared memory.
>

That's where you are wrong: there is no reason for *your* use case, but the
basic idea behind process substitution is to be able to use a pipe in a
place where you normally need a file name.
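A minimal illustration of that idea:

```shell
# diff expects two file names; process substitution hands it readable
# pathnames (e.g. /dev/fd/63) that are really pipe endpoints.
diff <(printf '1\n2\n') <(printf '1\n2\n') && echo same   # prints: same
# the loop-feeding form discussed in the thread works the same way:
wc -l < <(printf 'one\ntwo\nthree\n')
```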


Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Pierre Gaston
On Tue, Oct 7, 2014 at 12:00 AM, Linda Walsh  wrote:

According to Chet , only way to do a multi-var assignment in bash is
>
>
>>> read a b c d  <<<$(echo ${array[@]})
>>>
>>
>> The redundant $(echo...) there is pretty bizarre.  Then again, that
>> whole command is strange.  You have a nice friendly array and you are
>> assigning the first 3 elements to 3 different scalar variables.  Why?
>> (The fourth scalar is picking up the remainder, so I assume it's the
>> rubbish bin.)
>>
>> Why not simply use the array elements?
>>
>>  Forcing a simple assignment into using a tmp file seems Machiavellian --
>>> as it does exactly the thing the user is trying to avoid through
>>> unexpected means.
>>>
>>
>> You are working in a severely constrained environment.
>>
> That isn't the problem:  the assignment using a tmp file is:
>
>> strace -p 48785 -ff
>>
> Process 48785 attached
> read(0, "\r", 1)= 1
> write(2, "\n", 1)   = 1
> socket(PF_NETLINK, SOCK_RAW, 9) = 3
> sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=},
> msg_iov(2)=[{"*\0\0\0d\4\1\0\0\0\0\0\0\0\0\0", 16}, {"read a b c d
> <<<${arr[@]}\0", 26}], msg_controllen=0, msg_flags=0}, 0) = 42
> close(3)= 0
> -
> Um... it used a socket.. to transfer it, then it uses a tmp file on top
> of that?!  :
>
> rt_sigaction(SIGINT, {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0},
> {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, 8) = 0
> open("/tmp/sh-thd-110678907923", O_WRONLY|O_CREAT|O_EXCL|O_TRUNC, 0600) = 3
> write(3, "one two three four", 18)  = 18
> write(3, "\n", 1)   = 1
> open("/tmp/sh-thd-110678907923", O_RDONLY) = 4
> close(3)= 0
> unlink("/tmp/sh-thd-110678907923")  = 0
> fcntl(0, F_GETFD)   = 0
> fcntl(0, F_DUPFD, 10)   = 10
> fcntl(0, F_GETFD)   = 0
> fcntl(10, F_SETFD, FD_CLOEXEC)  = 0
> dup2(4, 0)  = 0
> close(4)= 0
> ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS,
> 0x7fff85627820) = -1 ENOTTY (Inappropriate ioctl for device)
> lseek(0, 0, SEEK_CUR)   = 0
> read(0, "one two three four\n", 128)= 19
> dup2(10, 0) = 0
> fcntl(10, F_GETFD)  = 0x1 (flags FD_CLOEXEC)
> close(10)   = 0
> -
>
> Why in gods name would it use a socket (still of arguable overhead, when
> it could be done in a local lib), but THEN it duplicates the i/o in a file?
>
>  Thus, you need
>> to adapt your code to that environment.  This may (often will) mean
>> you must forsake your usual practices, and fall back to simpler
>> techniques.
>>
> 
> The above is under a normal environment.  It's still broken.
>
>
>
>
>
>> Maybe the best solution here is to move your script to a different part
>> of the boot sequence.  If you run it after all the local file systems
>> have been mounted, then you should be able to create temporary files,
>> which in turns means << and <<< become available, should you need them.
>>
> 
> Theoretically, they ARE mounted What I think may be happening
> is that $TMP is not set so it is trying to open the tmp dir in:
>
> "//sh-thd-183928392381" -- a network address.
>
>
>  Elegance must be the first sacrifice upon the altar, here.
>>
> ---
> Correctness before elegance.  1), use memory before OS services.
> 2) use in-memory services before file services, 3) Don't use uninitialized
> variables (TMP) -- verify that they are sane values before usage.
> 4) don't use network for tmp when /tmp or /var/tmp would have worked just
> fine.
>
>
>
>
>>  So why would someone use a tmp file to do an assignment.
>>>
>>
>> It has nothing to do with assignments.  Temp files are how here documents
>> (and their cousin here strings) are implemented.  A here document is a
>> redirection of standard input.  A redirection *from a file*
>>
> 
> Nope:
>  This type of redirection instructs the shell to  read  input  from  the
>current source until a line containing only delimiter (with no
> trailing
>blanks) is seen.  All of the lines read up to that point are then
> used
>as the standard input for a command.
>
> "The current source" -- can be anything that input can be
> redirected from -- including memory.
>
>
>  in fact,
>> albeit a file that is created on the fly for you by the shell.
>>
> 
> Gratuitous expectation of slow resources...  Non-conservation
> of resources, not for the user, but for itself.
>
> Note:
>
> a=$(<foo)
> echo "$a"
>>
> one
> two
> three
> ---
> no tmp files used, but it does a file read on foo.
>
> b=$a
> -- the above uses no tmp files.
>
> b=$(echo "$a")
> ---
> THAT uses no tmp files.
>
> But
> b=<<<$a
>
> uses a tmp file.
>
> That's ridiculously lame.
>

b=<<<$a is not an assignment: bash parses it as the empty assignment b=
with a here-string redirection attached, which is why the temporary file is
created.

Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Pierre Gaston
On Mon, Oct 6, 2014 at 10:38 PM, Linda Walsh  wrote:

> Greg Wooledge wrote:
>
>> On Mon, Oct 06, 2014 at 12:14:57PM -0700, Linda Walsh wrote:
>>
>>>done <<<"$(get_net_IFnames_hwaddrs)"
>>>
>>
>>  Where am I using a HERE doc?
>>>
>>
>> <<< and << both create temporary files.
>>
>
>
> According to Chet , only way to do a multi-var assignment in bash is
>
> read a b c d  <<<$(echo ${array[@]})
>
> Forcing a simple assignment into using a tmp file seems Machiavellian --
> as it does exactly the thing the user is trying to avoid through
> unexpected means.
>
> The point of grouping assignments is to save space (in the code) and have
> the group initialized at the same time -- and more quickly than using
> separate assignments.
>
> So why would someone use a tmp file to do an assignment.
>
> Even the gcc chain is able to use "pipe" to send the results of one stage
> of the compiler to the next without using a tmp.
>
> That's been around for at least 10 years.
>
> So why would a temp file be used?
>
>
> Creating a tmp file to do an assignment, I assert is a bug.
>
> It is entirely counter-intuitive that such wouldn't use the same mechanism
> as LtR ordered pipes.
>
> I.e.
>
> cmd1 | cmd2 -- that hasn't used tmp files on modern *nix systems for
> probably 20 years or more (I think DOS was the last shell I knew that used
> tmp files...)
>
> so why would "cmd2 < <(cmd1 [|])" not use the same paradigm -- worse, is
>
> cmd1 >& MEMVAR   -- output is already in memory...
>
> so why would read a b c <<<${MEMVAR} need a tmp file if the text to be
> read is already in memory?
>
>

Because it's not a simple assignment, it's using a mechanism to send data
to an external program and another one to read from a stream of data.

Some shells use the buffer of a pipe as an optimization when the amount of
data is small (which is probably the case for most heredocs/here-strings).
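Whatever the underlying mechanism, the here string behaves like any other
stdin redirection; a quick sketch of the multi-variable read in question:

```shell
# read consumes the here string as standard input; since no pipeline is
# involved, the variables are set in the current shell.
arr=(one two three four)
read -r a b c d <<< "${arr[*]}"
echo "$a-$b-$c-$d"   # prints: one-two-three-four
```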


Re: Issues with exported functions

2014-09-25 Thread Pierre Gaston
On Thu, Sep 25, 2014 at 1:04 PM, lolilolicon  wrote:

> On Thu, Sep 25, 2014 at 5:51 PM, Pierre Gaston 
> wrote:
> >
> >
> > On Thu, Sep 25, 2014 at 12:42 PM, lolilolicon 
> wrote:
> >>
> >> On Thu, Sep 25, 2014 at 7:19 AM, Linda Walsh  wrote:
> >> > lolilolicon wrote:
> >> >>
> >> >> I don't expect more than a dozen who rely on this... but bash
> >> >> programmers can be quite the perverts, so...
> >> >>
> >> >
> >> > Personally I find those who don't read the man page, and then claim
> that
> >> > documented
> >> > behavior is a "bug" are the real "perverts".  They expect documented
> >> > behavior to work
> >> > some way other than is documented... How is that not perverted?
> >>
> >> You're arguing "like a girl". I didn't say the documented behavior was a
> >
> >
> > uh?  really?
> > Please go away, it's already bad enough you are discussing things you
> don't
> > fully understand without being sexist on top of that.
>
> Isn't the whole point of discussing better understanding? If you have to
> fully understand a thing to be allowed to discuss it, then there will be
> no discussion allowed.
>
> You're too easily stoked. Please don't be so sensitive. Notice the double
> quotes? I'm using the stereotype as a shorthand. Stereotypes exist and
> are widely understood, much like idioms.
>
> In any event, this is but irrelevant to the discussion. Do not seize the
> red herring.
>

It is fully relevant when you use a sexist stereotype as an argument.


Re: Issues with exported functions

2014-09-25 Thread Pierre Gaston
On Thu, Sep 25, 2014 at 12:42 PM, lolilolicon  wrote:

> On Thu, Sep 25, 2014 at 7:19 AM, Linda Walsh  wrote:
> > lolilolicon wrote:
> >>
> >> I don't expect more than a dozen who rely on this... but bash
> >> programmers can be quite the perverts, so...
> >>
> >
> > Personally I find those who don't read the man page, and then claim that
> > documented
> > behavior is a "bug" are the real "perverts".  They expect documented
> > behavior to work
> > some way other than is documented... How is that not perverted?
>
> You're arguing "like a girl". I didn't say the documented behavior was a


uh?  really?
Please go away, it's already bad enough you are discussing things you don't
fully understand without being sexist on top of that.


Re: Issues with exported functions

2014-09-25 Thread Pierre Gaston
On Thu, Sep 25, 2014 at 11:06 AM, lolilolicon  wrote:

> On Thu, Sep 25, 2014 at 9:35 AM, Chet Ramey  wrote:
> > On 9/24/14, 3:44 PM, lolilolicon wrote:
> >
> >> Personally, I have never needed this feature. I would vote for its
> >> removal: It's very surprising, creates bugs, and is not very useful.
> >
> > There are more things in heaven and earth that are dreamt of in your
> > philosophy.
>
> OK guys! Exported functions are widely used by experts, I get it now.
>
> >
> >> Otherwise, if this feature is going to stay (can anyone enlighten me why
> >> it's useful?), please document it explicitly.
> >
> > Function export is documented.  The exact mechanism need not be.
>
> Sure, the mechanism need not be documented, if it didn't matter on the
> interface level. But it does. In particular,
>
> % pat='() { $:*;}' bash -c 'tr "$pat" _ <<< "(x){1}"'
> (x){1}
>
> (This is bash 4.3.25)
>
> This is not the best example, but you get the idea.
>
> Perhaps you have plans to change the implementation?
>
>
How many instances have you found since the introduction of this feature
more than 20 years ago?


Re: Difference between assignment via nameref vs `printf -v`?

2014-08-31 Thread Pierre Gaston
On Sun, Aug 31, 2014 at 7:19 PM, lolilolicon  wrote:

> On Sun, Aug 31, 2014 at 12:20 PM, lolilolicon 
> wrote:
> > Assignment to a subscripted array variable behaves differently for
> > nameref vs `printf -v`, as shown below.
> >
> > Assignment via nameref variable:
> >
> > declare -a arr=()
> > func() {
> >   local -n ref=$1
> >   ref='nameref'
> > }
> > func 'arr[0]'
> > declare -p 'arr[0]' arr
> >
> > --- output ---
> > declare -- arr[0]="nameref"
> > declare -a arr='()'
>
> It's a damn bug alright. Just curious though, shouldn't assignments to
> via nameref variables re-use the same code as `printf -v`?
>
>
Not sure what makes this bug a damned one, but it's probably precisely
because it re-uses the same code that it doesn't make an extra check and
creates this invalid variable name 'arr[0]'.
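A workaround sketch: pass the array name and the subscript separately, so
the nameref itself is always a valid identifier (`set_elem` is an
illustrative name):

```shell
# Passing the array name and subscript separately keeps the nameref a
# valid identifier.
declare -a arr=()
set_elem() {
  local -n ref=$1   # nameref to the whole array
  ref[$2]=$3        # subscript applied through the nameref
}
set_elem arr 0 'nameref'
echo "${arr[0]}"    # prints: nameref
```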


Re: Some kind of file descriptor overflow

2014-06-13 Thread Pierre Gaston
On Fri, Jun 13, 2014 at 9:56 PM, Jorge Sivil  wrote:

> Yes, sorry. The minimum reproduceable code is:
>
> #!/bin/bash
> function something() {
>   while true
>   do
> while read VAR
> do
>   dummyvar="a"
> done < <(find "/run/shm/debora" -type f | sort)
> sleep 3
>   done
> }
> something &
>
> Which fails with many pipes fd open.
>
> Changing the While feed to this:
>
> #!/bin/bash
> function something() {
>   find "/run/shm/debora" -type f | sort | while true
>   do
> while read VAR
> do
>   dummyvar="a"
> done
> sleep 3
>   done
> }
> something &
>
> Works completely normal.
>
> However, removing the call as function in background:
>
> #!/bin/bash
> while true
> do
>   while read VAR
>   do
> dummyvar="a"
>   done < <(find "/run/shm/debora" -type f | sort)
>   sleep 3
> done
>
> But executing the script with ./test.sh & (in background), works
> without problems too.
>
> On Fri, Jun 13, 2014 at 2:35 PM, Eduardo A. Bustamante López
>  wrote:
> > On Fri, Jun 13, 2014 at 09:52:49AM -0300, Jorge Sivil wrote:
> >> The script is in the answer:
> >>
> >>
> http://stackoverflow.com/questions/24192459/bash-running-out-of-file-descriptors
> > Can't you reduce the script to a minimum reproducible case? To be
> > honest, it smells like a scripting error and not a bug, but the code
> > in that answer is too large and with too many dependencies to be even
> > worth the time to execute.
>
>
>
> --
> Atte.: Jorge Sivil
>

Yes, there was a bug and it has been fixed in 4.3 as far as I can tell.
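A reduced sketch of the pattern in question:

```shell
# Reduced version of the pattern: the while loop runs in the current
# shell and reads from the process substitution's /dev/fd descriptor.
count=0
while read -r line; do
  count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "$count"   # prints: 3
```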


Re: Arithmetic + array allows for code injection

2014-06-02 Thread Pierre Gaston
On Mon, Jun 2, 2014 at 4:44 PM, Chet Ramey  wrote:

> On 6/2/14, 8:21 AM, Greg Wooledge wrote:
> > On Fri, May 30, 2014 at 09:28:13PM -0500, Dan Douglas wrote:
> >> The problem is most people don't realize how "variables" are evaluated.
> >> Any time the shell needs to reference a variable, it takes a string
> >> like: "arr[$foo]" and, if there's an index, the string within the index
> >> gets processed for expansions. The arithmetic evaluator is no exception.
> >
> > I'm trying to understand this, but it's not clear to me yet.
> >
> > imadev:~$ x='$(date)'
> > imadev:~$ : $(($x))
> > bash: $(date): syntax error: operand expected (error token is "$(date)")
> >
> > That looks OK.
> >
> > imadev:~$ : $((a[$x]))
> > bash: Mon Jun 2 08:06:39 EDT 2014: syntax error in expression (error
> token is "Jun 2 08:06:39 EDT 2014")
> >
> > There's the code-injection problem that started the thread.
> >
> > imadev:~$ : ${a[$x]}
> > bash: $(date): syntax error: operand expected (error token is "$(date)")
> >
> > That also looks OK.
> >
> > Why is there no code injection in the last example?  There is an index.
> > According to your paragraph, "... the string within the index gets
> > processed for expansions. The arithmetic evaluator is no exception."
>
> The arithmetic evaluator is, in fact, an exception.  That, combined with
> the expansions that happen before the arithmetic evaluator gets hold of
> the expression -- and it is an expression -- leads to the difference.
>
> In the first case, the arithmetic evaluator sees `a[$(date)]' as the
> expression after parameter expansion is performed:
>
> "All tokens in the expression undergo parameter and variable expansion,
> command substitution, and quote removal.  The result  is  treated  as  the
> arithmetic expression  to  be evaluated."
>
> Since that expression looks like a variable expansion, the following
> sentence in the description of arithmetic evaluation is applicable:
>
> "Within an expression, shell variables may also be referenced by name
> without  using  the  parameter expansion  syntax.  A shell variable that
> is null or unset evaluates to 0 when referenced by name without using the
> parameter expansion syntax."
>
> The a[$(date)] is identified as an array index, so the $(date) is expanded
> like any other index, and evaluated as an expression.
>
> This is what lets you use things like 'x+1' and 'x[y+1]' in arithmetic
> expansions.
>
> The parameter expansion example (${a[$x]}) doesn't undergo that `extra'
> expansion.  The index that ends up being passed to the evaluator is `$x',
> which is expanded to `$(date)'.  That is treated as an expression and
> evaluation fails.
>
> Chet
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRUc...@case.edu
> http://cnswww.cns.cwru.edu/~chet/
>

Even if there is a perfectly good justification as to why this works, I
still think this is a terribly broken feature of the language.

The number of shell scripters, even experienced ones, who will realize that
$((a["$i"])) makes code injection possible is probably very close to 0,
while the first thing a script kiddie would do to create trouble is to
embed $( ) in his strings.


Re: Arithmetic + array allows for code injection

2014-05-30 Thread Pierre Gaston
On Fri, May 30, 2014 at 9:08 PM, Greg Wooledge  wrote:

> On Fri, May 30, 2014 at 08:57:42PM +0300, Pierre Gaston wrote:
> > It doesn't seem right for code looking as innocent as $((a[$i])) or
> > $((a["$i"])) to allow running arbitrary commands for some value of i,
> that
> > are no even that clever:
> >
> > $ i='$( echo >&2 an arbitrary command )';:  $((a["$i"]))
> > an arbitrary command
> >
> > $ i='"$( echo >&2 an arbitrary command)"';: $((a[$i]))
> > an arbitrary command
>
> A workaround is to avoid the explicit $i inside the square brackets:
>
> imadev:~$ i='$(date)'; : $((a[$i]))
> bash: Fri May 30 14:05:34 EDT 2014: syntax error in expression (error
> token is "May 30 14:05:34 EDT 2014")
> imadev:~$ i='$(date)'; : $((a[i]))
> bash: $(date): syntax error: operand expected (error token is "$(date)")
>
> I don't dispute the need to fix it, though.
>

Right; in fact this bug was found when playing with associative arrays,
where this workaround is not possible. With declare -A a you can use
$((${a["$i"]})).
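A sketch of that workaround:

```shell
# Expanding the element with ${a[$i]} before arithmetic evaluation keeps
# an untrusted subscript from being evaluated a second time.
declare -A a=([safe]=7)
i='safe'
echo "$(( ${a[$i]} + 1 ))"   # prints: 8
```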


Arithmetic + array allows for code injection

2014-05-30 Thread Pierre Gaston
It doesn't seem right for code looking as innocent as $((a[$i])) or
$((a["$i"])) to allow running arbitrary commands for some value of i, that
are no even that clever:

$ i='$( echo >&2 an arbitrary command )';:  $((a["$i"]))
an arbitrary command

$ i='"$( echo >&2 an arbitrary command)"';: $((a[$i]))
an arbitrary command


Re: Bind builtin does not run readline commands

2014-05-27 Thread Pierre Gaston
On Tue, May 27, 2014 at 8:19 AM,  wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: x86_64-pc-linux-gnu-gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
> -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL
> -DHAVE_CONFIG_H   -I. -I./include -I. -I./include -I./lib
>  
> -DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
> -DSTANDARD_UTILS_PATH='/bin:/usr/bin:/sbin:/usr/sbin'
> -DSYS_BASHRC='/etc/bash/bashrc' -DSYS_BASH_LOGOUT='/etc/bash/bash_logout'
> -DNON_INTERACTIVE_LOGIN_SHELLS -DSSH_SOURCE_BASHRC -march=corei7-avx -O2
> -pipe -fomit-frame-pointer --param l1-cache-size=32 --param
> l1-cache-line-size=64 --param l2-cache-size=6144
> uname output: Linux home 3.13.10-geek-i5 #2 SMP PREEMPT Wed May 21
> 23:26:16 MSK 2014 x86_64 Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz
> GenuineIntel GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 4.2
> Patch Level: 45
> Release Status: release
>
> Description:
> The syntax of the 'bind' builtin command suggests that any
> readline command may be executed just by passing command name to the 'bind'
> in accordance with 'man bash'. However,
>


It looks like a man page bug: instead of

bind [-m keymap] keyseq:function-name
bind readline-command

"help bind" shows:

bind: bind [-lpvsPVS] [-m keymap] [-f filename] [-q name] [-u  name] [-r
keyseq] [-x keyseq:shell-command] [keyseq:readline-function or
readline-command]

and it's pretty clear from the description that bind can only bind keys and
set readline variables


Re: winch trap delayed until keypress

2014-05-22 Thread Pierre Gaston
On Thu, May 22, 2014 at 4:02 PM, Linda Walsh  wrote:

>
>
> Pierre Gaston wrote:
>
>>
>> As I understand it, this is now broken in 4.3?:
>> 
>> # display new size of terminal when resized
>> function showsize () {\
>>   local s=$(stty size); local o="(${s% *}x${s#* })"; s="${#o}";\
>>   echo -n $o; while ((s-- > 0));do echo -ne "\b"; done; \
>> }
>>
>> trap showsize SIGWINCH
>> 
>>
>>  As I understand it, it only affects the interactive behaviour of bash,
>> and doesn't change anything for your use case, the trap will be executed
>> when the foreground command exits.
>>
>
> ---
> If I wanted to "see" the size interactively, I wouldn't resize it unless
> I'm at the command prompt.  I.e. it will be waiting for input.
> Each time I move the corner of a tty window it displays the size
> at the point of the cursor (where the command prompt is).
>
> This allows me to dynamically see what I'm doing when I resize a window
> if I am going for a specific size (which happens usually after I've
> expanded it for some reason, and am now trying to size it back down


I don't really understand what you are saying, but if you use this trap in
a script it will work as before; if you use it in your interactive shell
and the shell is at the prompt, it will only be called when you press a
key.


Re: winch trap delayed until keypress

2014-05-22 Thread Pierre Gaston
On Thu, May 22, 2014 at 8:16 AM, Linda Walsh  wrote:

>
>
> Chet Ramey wrote:
>
>> On 5/20/14, 8:28 AM, Egmont Koblinger wrote:
>>
>>> Hi,
>>>
>>> Execute this in an interactive bash and then resize the window:
>>> trap 'stty size' winch
>>>
>>> In bash-4.2, the trap was executed immediately upon resize.  In
>>> bash-4.3, it is delayed until the next keypress.
>>>
>>
>> http://lists.gnu.org/archive/html/bug-readline/2014-05/msg5.html
>>
> ---
> And if the window is just displaying something -- running a shell script,
> when will it get the resize event so it can update it's display?
>
> I.e. a window resize can easily happen when no key press is in sight,
> Seems like deferring it and at least calling it off a timer.
>
> But it sounds like my winch handler that would tell me the size of
> a resized screen will now be broken.
>
> The whole point was to resize and it would update and tell you the size
> when you finished moving.  If you have to wait to type keys each time, that
> sorta defeats the point.
>
> The previous version said this (checkwinsize):
>   If set, bash checks the window size after  each  command
>   and,  if necessary, updates the values of LINES and COLUMNS.
>
> Wasn't checking the window size after each command safe?
>
> It may not be the same as a key press, but it is alot more likely to
> occur -- running a script or running interactively.
>
>
> As I understand it, this is now broken in 4.3?:
> 
> # display new size of terminal when resized
> function showsize () {\
>   local s=$(stty size); local o="(${s% *}x${s#* })"; s="${#o}";\
>   echo -n $o; while ((s-- > 0));do echo -ne "\b"; done; \
> }
>
> trap showsize SIGWINCH
> 
>

As I understand it, it only affects the interactive behaviour of bash, and
doesn't change anything for your use case, the trap will be executed when
the foreground command exits.
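A sketch of that timing, with SIGUSR1 standing in for SIGWINCH (the delays
are illustrative):

```shell
# The signal arrives while 'sleep 2' is the foreground command; bash
# runs the trap only once that command has exited.
trap 'echo trap ran' USR1
( sleep 1; kill -USR1 $$ ) &   # deliver the signal mid-sleep
sleep 2                        # foreground command; trap fires after it
wait
echo done                      # prints: "trap ran" then "done"
```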


couple of bugs

2014-04-29 Thread Pierre Gaston
A couple of 4.3 bugs have surfaced on IRC, I'm not sure they are reported
here, so just in case here they are

1) bash gets stuck

shopt -s extglob
echo !(*/) # never returns, cannot be interrupted

2) $0 is not always expanded:

echo "without \$1 ${@:0:1}";set -- one;echo "with \$1 ${@:0:1}"
without $1
with $1 ./bash

3) bash finds a process substitution inside (( )) somehow

$ for ((i=0; i<($(echo 2)+3);i++));do echo $i;done
bash: syntax error near unexpected token `newline'


Re: Command name dequote does not work

2014-04-15 Thread Pierre Gaston
On Tue, Apr 15, 2014 at 10:32 AM,  wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu'
> -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL
> -DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include -I../bash/lib
>  -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector --param=ssp-buffer-size=4
> -Wformat -Werror=format-security -Wall
> uname output: Linux k210app1 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64
> GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 4.2
> Patch Level: 37
> Release Status: release
>
> Description:
> I wrote a simple shell script using sed to convert \n into
> newlines and other \\(.) to \\\1 and called that script dequote.
> When I'm trying to call it nothing happens, when I'm calling
> $HOME/bin/dequote it works. There's no alias called dequote, there's no
> other dequote in path.
> When I rename the script and call it it just works as expected.
>
> This looks like a secret command, that just echos its arguments, as
> calling "dequote xyz" echoes "xyz".
>
> But having a secret alias or command is very evil. What's up here?
>
>
> Repeat-By:
> see above
>
> Fix:
> Don't use secret commands. Any builtin has to be documented on
> the help page.
>

Please paste the output of:

type dequote

I can't reproduce your issue.
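For reference, `type -t` shows how a name resolves (the `dequote` function
below is only illustrative):

```shell
# 'type' reports how bash resolves a command name.
type -t cd        # prints: builtin
dequote() { :; }  # illustrative function shadowing a script on PATH
type -t dequote   # prints: function
```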


Re: jobs -p falsely reports the last background pid

2014-04-09 Thread Pierre Gaston
On Wed, Apr 9, 2014 at 3:28 PM, Greg Wooledge  wrote:

> On Wed, Apr 09, 2014 at 02:16:22PM +0200, Håkon Bugge wrote:
> > That is not the issue. Try it out.
>
> Very well.  I can confirm that this script does not terminate on HP-UX
> 10.20 under bash 4.3.8:
>
> #!/bin/bash
> set -m
> for x in 1 2 3 4 5; do sleep 1 & done
> while jobs=$(jobs -p)
>   echo "jobs left: <$jobs>"
>   [[ $jobs != "" ]]
> do
> sleep 1
> done
>
> As a workaround, you could consider using "wait" instead of this polling
> loop to detect the termination of child processes.  This script, for
> example, properly waits and terminates:
>
> #!/bin/bash
> set -m
> for x in 1 2 3 4 5; do sleep 1 & done
> wait
>

The rationale of sus seems to say that it should also work without set -m

"The jobs utility is not dependent on the job control option, as are the
seemingly related bg and fg utilities because jobs is useful for examining
background jobs, regardless of the condition of job control. When the user
has invoked a set +m command and job control has been turned off, jobs can
still be used to examine the background jobs associated with that current
session. Similarly, kill can then be used to kill background jobs with kill
%. "


Re: jobs -p falsely reports the last background pid

2014-04-09 Thread Pierre Gaston
On Wed, Apr 9, 2014 at 3:16 PM, Håkon Bugge  wrote:

>
> On 9. apr. 2014, at 14.04, Greg Wooledge wrote:
>
> > On Wed, Apr 09, 2014 at 12:43:40PM +0200, Håkon Bugge wrote:
> >> This script never terminates:
> >> --
> >> #!/bin/bash
> >>
> >> for P in `seq 5`; do
> >>sleep 1&
> >> done
> >>
> >> while true; do
> >>usleep 2
> >>set foo `jobs -p`
> >>LEFT=$#
> >>LEFT=$[LEFT-1]
> >>echo $LEFT jobs left
> >>if [ x$LEFT = x0 ]; then
> >>  break
> >>fi
> >> done
> >
> >>   Pasting the same commands in an interactive shell, its works.
> >
> > Interactive shells enable job control (monitor mode), whereas
> > noninteractive shells (scripts) do not.  If you want to use job control
> > commands (like "jobs") within a script, you must enable monitor mode
> > (set -m, or set -o monitor, or #!/bin/bash -m).
>
> That is not the issue. Try it out.
>
>
> Håkon
>

I think Greg hints that what "jobs" should do without set -m is not
defined... I wonder if that's really the case.

However enabling set -m doesn't help

I agree that your script triggers a strange behavior:

Funnily :
bash -c 'sleep 1& sleep 1& sleep 1& sleep 3; jobs -p' # doesn't print
anything

while if you just add "jobs -p" in your loop without capturing it, it
always prints all the pids


Re: /dev/fd/62: No such file or directory

2014-04-05 Thread Pierre Gaston
On Sat, Apr 5, 2014 at 1:46 PM, Linda Walsh  wrote:

>
>
> Chris Down wrote:
>
>> Linda Walsh writes:
>>
>>> So all I need do is test the first entry:
>>>
>>>local -a entries=("$1"/*)
>>>[[ ${entries[0]} == $1/* ]] && return 0
>>>
>>> --- the $1 doesn't need quotes in [[]] and '*' won't expand or
>>> am missing something?  Thanks for the tip Pierre, I often
>>> don't see forests because of all the trees...
>>>
>>
>> The RHS of [[ has pattern matching applied on unquoted parts, so yes,
>> you probably want quotes around $1.
>>
> 
> Pattern matching?   Why doesn't '*' match anything then?
>
> Do you mean pathname expansion?  or are you thinking of the =~
> operator?  I.e. where would it match a pattern?  It can't match
> in the current directory, since it just failed a pathname
> expansion in the line before.  But for a regex, I thought you
> needed '=~' ??
>

* matches everything and nothing, so you need to quote the whole thing:

$ [[ foo == * ]] && echo true
true

Your test will also fail if there is a file named "*"; it's better to just
use [[ -e ${entries[0]} ]]
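A sketch of the resulting check (`is_empty_dir` is an illustrative name;
like the original, it ignores dotfiles):

```shell
# The glob either matches a real entry (-e succeeds) or stays a literal
# "$1/*" string (-e fails), so -e distinguishes empty from non-empty.
is_empty_dir() {
  local entries=("$1"/*)
  [[ ! -e ${entries[0]} ]]
}
d=$(mktemp -d)
is_empty_dir "$d" && echo empty        # prints: empty
touch "$d/file"
is_empty_dir "$d" || echo "not empty"  # prints: not empty
rm -rf "$d"
```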


Re: /dev/fd/62: No such file or directory

2014-04-01 Thread Pierre Gaston
On Wed, Apr 2, 2014 at 6:04 AM, Linda Walsh  wrote:

>
>
> Greg Wooledge wrote:
>
>> On Fri, Mar 28, 2014 at 06:14:27PM -0700, Linda Walsh wrote:
>>
>>> Does read varname <<<$(...) use process substitution?
>>>
>>
>> I wouldn't dare write it like that, because who knows how the parser
>> will treat it.  I'd write it this way:
>>
>> read varname <<< "$(...)"
>>
>> This is a command substitution and a here-string.  Here-strings are
>> implemented with a temporary file, I believe.
>>
> 
> Well don't know if it circumvents the /fd/62 prob
> yet (got a few places more to check & convert),
> but this seems to work for checking if a file
> or dir is empty:
>
> function empty {
>   [[ $# -lt 1 ]] && return -1
>   [[ -f $1 && ! -s $1 ]] && return 0
>   [[ -d $1 ]] && {
> readarray entries<<<"$(cd "$1" && printf "%s\n" * 2>/dev/null)"
> ((${#entries[@]} < 3)) && return 0
>   }
>   return 1
> }
>
> Had one with find+wc, but this one doesn't rely on any
> sub-utils.
>
>

Why not simply: entries=("$1"/*) ?


Re: easier construction of arrays

2014-03-27 Thread Pierre Gaston
On Thu, Mar 27, 2014 at 5:53 PM, Mike Frysinger  wrote:

> On Thu 27 Mar 2014 08:01:45 Greg Wooledge wrote:
> > files=()
> > while IFS= read -r -d '' file; do
> >   files+=("$file")
> > done < <(find . -iname '*.mp3' ! -iname '*abba*' -print0)
>
> i've seen this construct duplicated so many times :(.  i wish we had a
> native
> option for it.  maybe something like:
> read -A files -r -d '' < <(find . -iname '*.mp3' -print0)
>
> perhaps there is another shell out there that implements something that can
> replace that loop that bash can crib ?
> -mike


An option to change the delimiter for readarray/mapfile?
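For the record, such an option did land later: bash 4.4 added -d to
mapfile/readarray, so NUL-delimited find output can be slurped without a
loop (requires bash >= 4.4):

```shell
# mapfile -d '' reads NUL-delimited records; -t strips the delimiter.
mapfile -d '' -t files < <(printf 'a.mp3\0b.mp3\0c.mp3\0')
echo "${#files[@]} ${files[1]}"   # prints: 3 b.mp3
```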


Re: Special built-ins not persisting assignments

2014-03-24 Thread Pierre Gaston
On Tue, Mar 25, 2014 at 2:39 AM, Pollock, Wayne  wrote:

> $ echo $BASH_VERSION
> 4.2.45(1)-release
>
> $ unset foo
>
> $ foo=bar :
>
> $ echo $foo
>
>
> $
>
> ===
>
> According to POSIX/SUS issue 7, assignments for special builtins
> should persist.  So the output should be ``bar''.
>
> Is there a setting I should turn off (or need to enable), to
> make this work correctly?
>
> I was able to confirm this bug for version 4.2.37(1)-release as
> well.  (zsh 4.3.17 (i386-redhat-linux-gnu) has the same bug.)
>
> --
> Wayne Pollock

It works when bash runs in posix mode, eg:

$ POSIXLY_CORRECT=1 bash -c 'foo=bar : ;echo $foo'
bar
$ bash --posix -c 'foo=bar : ;echo $foo'
bar
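
A quick side-by-side of the two modes (assuming a bash binary on PATH):

```shell
#!/usr/bin/env bash
# Default mode: an assignment preceding the special builtin ":" is temporary.
bash -c 'foo=bar : ; echo "default: ${foo:-unset}"'       # prints: default: unset
# POSIX mode: the assignment persists after the builtin, as POSIX requires.
bash --posix -c 'foo=bar : ; echo "posix: ${foo:-unset}"' # prints: posix: bar
```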


Re: Top does not handle more than 100 cores

2014-03-24 Thread Pierre Gaston
On Mon, Mar 24, 2014 at 12:08 PM, Alexandre De Champeaux
wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64'
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-redhat-linux-gnu'
> -DCONF_VENDOR='redhat' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
> -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  -D_GNU_SOURCE
> -DRECYCLES_PIDS  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
> -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fwrapv
> uname output: Linux isv 2.6.32-279.el6.x86_64 #1 SMP Wed Jun 13 18:24:36
> EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-redhat-linux-gnu
>
> Bash Version: 4.1
> Patch Level: 2
> Release Status: release
>
> Description:
> The top command does not handle more than 100 cores. I also
> attached two screenshots of the issue, one showing the output of the  top
> command, and the other of a custom script showing the CPU usage per numa
> node (the server has 160 cores).
> Here is the output of top -v:
> [root@isv ~]# top -v
> top: procps version 3.2.8
>
> I did not try to reproduce on the latest version of top.
>
>
> Repeat-By:
> Find a pretty cool machine with quite a bunch of cores, and run an
> highly parallel program :).
>
> --
> Alexandre
>

However, top is an external command and has nothing to do with bash, so you
need to report this to its authors, or to the package maintainer(s) of your
distribution.


Re: Please accept M-p as well as C-p

2014-02-13 Thread Pierre Gaston
On Thu, Feb 13, 2014 at 1:35 PM, Ed Avis  wrote:

> Bash accepts the Emacs keybinding C-p to go back in the history, and C-n
> to go forward.
> But most of the time in Emacs (when using its minibuffer) the keys you use
> are Meta-p
> and Meta-n, or on a modern PC keyboard Alt-p and Alt-n.
>
> Currently entering M-p at the bash prompt gives some control characters.
> It could more usefully go back in the history instead.  Then if you flip
> between GNU
> emacs and GNU bash you wouldn't keep typing the wrong thing.
>
> --
> Ed Avis 
>
>
>

You can add in your ~/.inputrc

"\ep": previous-history
"\en": next-history
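
To try the bindings out in a running interactive shell without editing
~/.inputrc, the bind builtin accepts the same inputrc syntax (a config
fragment for interactive use only; it has no effect where line editing is
disabled):

```shell
bind '"\ep": previous-history'
bind '"\en": next-history'
```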


Re: Segmentation fault when -x is added and variable contains nulls

2014-02-06 Thread Pierre Gaston
On Thu, Feb 6, 2014 at 4:07 PM, Pierre Gaston wrote:

>
>
>
> On Thu, Feb 6, 2014 at 3:38 PM, Chet Ramey  wrote:
>
>> On 2/5/14 10:51 PM, Dan Jacobson wrote:
>> > # su - nobody
>> > No directory, logging in with HOME=/
>> > $ cat /tmp/r
>> > LC_CTYPE=zh_TW.UTF-8 N=$(echo 統一|iconv -t big5 -f utf-8) sh -xc ': $N'
>> > $ sh /tmp/r
>> > /tmp/r: line 1:  4551 Segmentation fault  LC_CTYPE=zh_TW.UTF-8
>> N=$(echo 統一|iconv -t big5 -f utf-8) sh -xc ': $N'
>> >
>> > Something about that embedded null.
>> > bash, version 4.3.0(1)-rc1 (i486-pc-linux-gnu)
>>
>> Probably.  How about a stack traceback from gdb?
>>
>> --
>> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>>  ``Ars longa, vita brevis'' - Hippocrates
>> Chet Ramey, ITS, CWRUc...@case.edu
>> http://cnswww.cns.cwru.edu/~chet/
>>
>>
>
> With bash 3.2.25(1)-release
>
> $ LC_CTYPE=zh_TW.UTF-8 N=$(echo  統一|iconv -t big5 -f utf-8) sh -xc ': $N'
> + : $'\262\316\244@'
>
>  With bash-rc1 I can reproduce it with: bash -xc $': \262\316\244@'
>
>
> Core was generated by `./bash -xc : $N'.
> Program terminated with signal 11, Segmentation fault.
> #0  ansic_quote (str=, flags=,
> rlen=0x0) at strtrans.c:282
> 282   *r++ = c;
> (gdb) bt
> #0  ansic_quote (str=, flags=,
> rlen=0x0) at strtrans.c:282
> #1  0x004303af in xtrace_print_word_list (list=0xa175ce8,
> xtflags=) at print_cmd.c:543
> #2  0x00436a0b in execute_simple_command
> (simple_command=0xa1750c8, pipe_in=-1, pipe_out=-1, async=0,
> fds_to_close=0xa175128) at execute_cmd.c:4008
> #3  0x004342d5 in execute_command_internal (command=0xa175088,
> asynchronous=0, pipe_in=-1, pipe_out=-1, fds_to_close=0xa175128) at
> execute_cmd.c:784
> #4  0x00475dd2 in parse_and_execute (string=,
> from_file=0x4b5d58 "-c", flags=) at evalstring.c:359
> #5  0x0041ec14 in run_one_command (command=0x7fffbdc94b0b ": $N")
> at shell.c:1339
> #6  0x0041fcaf in main (argc=,
> argv=0x7fffbdc928c8, env=0x7fffbdc928e8) at shell.c:694
> (gdb) q
>


Sorry, I should have mentioned my locale, en_US.UTF-8. It seems to be a
problem with handling broken UTF-8, since it doesn't happen with:
LC_ALL=C ./bash -xc $': \262\316\244@'


Re: Segmentation fault when -x is added and variable contains nulls

2014-02-06 Thread Pierre Gaston
On Thu, Feb 6, 2014 at 3:38 PM, Chet Ramey  wrote:

> On 2/5/14 10:51 PM, Dan Jacobson wrote:
> > # su - nobody
> > No directory, logging in with HOME=/
> > $ cat /tmp/r
> > LC_CTYPE=zh_TW.UTF-8 N=$(echo 統一|iconv -t big5 -f utf-8) sh -xc ': $N'
> > $ sh /tmp/r
> > /tmp/r: line 1:  4551 Segmentation fault  LC_CTYPE=zh_TW.UTF-8
> N=$(echo 統一|iconv -t big5 -f utf-8) sh -xc ': $N'
> >
> > Something about that embedded null.
> > bash, version 4.3.0(1)-rc1 (i486-pc-linux-gnu)
>
> Probably.  How about a stack traceback from gdb?
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
>  ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, ITS, CWRUc...@case.edu
> http://cnswww.cns.cwru.edu/~chet/
>
>

With bash 3.2.25(1)-release

$ LC_CTYPE=zh_TW.UTF-8 N=$(echo  統一|iconv -t big5 -f utf-8) sh -xc ': $N'
+ : $'\262\316\244@'

With bash-rc1 I can reproduce it with: bash -xc $': \262\316\244@'


Core was generated by `./bash -xc : $N'.
Program terminated with signal 11, Segmentation fault.
#0  ansic_quote (str=, flags=,
rlen=0x0) at strtrans.c:282
282   *r++ = c;
(gdb) bt
#0  ansic_quote (str=, flags=,
rlen=0x0) at strtrans.c:282
#1  0x004303af in xtrace_print_word_list (list=0xa175ce8,
xtflags=) at print_cmd.c:543
#2  0x00436a0b in execute_simple_command (simple_command=0xa1750c8,
pipe_in=-1, pipe_out=-1, async=0, fds_to_close=0xa175128) at
execute_cmd.c:4008
#3  0x004342d5 in execute_command_internal (command=0xa175088,
asynchronous=0, pipe_in=-1, pipe_out=-1, fds_to_close=0xa175128) at
execute_cmd.c:784
#4  0x00475dd2 in parse_and_execute (string=,
from_file=0x4b5d58 "-c", flags=) at evalstring.c:359
#5  0x0041ec14 in run_one_command (command=0x7fffbdc94b0b ": $N")
at shell.c:1339
#6  0x0041fcaf in main (argc=,
argv=0x7fffbdc928c8, env=0x7fffbdc928e8) at shell.c:694
(gdb) q


Re: let's establish BASH_MINIMUM_TIME_BETWEEN_INTERACTIVE_COMMAND

2014-01-30 Thread Pierre Gaston
On Thu, Jan 30, 2014 at 12:56 PM, Dan Jacobson  wrote:

> >>>>> "PG" == Pierre Gaston  writes:
> PG> Maybe try something like: PROMPT_COMMAND='read -t0 && sleep 10'
>
> But how will that on its own stop me from dumping tons of lines of junk
> into bash via one accidental mouse click?
>

Well, if the sleep trick works, the hope is that read -t0 returns true
whenever something is still pending in the terminal's input buffer, and sleep
then gets called. Of course, I don't know precisely how all this works, so
maybe you need more than 4k of data, hence the "maybe".
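
A minimal sketch of the property this PROMPT_COMMAND relies on: with a
timeout of 0, read merely polls whether input is already pending on stdin,
without consuming anything:

```shell
#!/usr/bin/env bash
# The here-string data is already buffered, so read -t 0 reports it as
# pending and returns success without reading it.
if read -t 0 <<< "queued input"; then
  echo "input pending"   # prints: input pending
fi
```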


Re: let's establish BASH_MINIMUM_TIME_BETWEEN_INTERACTIVE_COMMAND

2014-01-30 Thread Pierre Gaston
On Thu, Jan 30, 2014 at 12:37 PM, Dan Jacobson  wrote:

> Thanks fellows but now bash has become very slow to the touch that way.
>

Maybe try something like: PROMPT_COMMAND='read -t0 && sleep 10'
