Re: Only one Friday 13th coming in 2016

2015-12-22 Thread Ken Irving
On Tue, Dec 22, 2015 at 04:04:16AM +0100, Ángel González wrote:
> Bill Duncan wrote:
> > Remember that while there are 14 patterns of years, leap years don't
> > impact Friday the 13th for January/February..
> > 
> > This isn't an exhaustive analysis, but a quick check for 300 years
> > didn't show any years without a Friday 13th..
> > 
> > ;-)
> > 
> > $ for y in {1900..2199} ; do for m in {1..12};do cal $m $y|awk
> > 'FNR==1{m=$0}/^ 1/{print m}';done;done | awk '{y[$2]++} END {for
> > (i=1900;i<2200;i++) if (!(i in y)) print i}'
> > $
> 
> 
> Aren't you making things more complex than needed, with so many pipes
> and awk?
> 
> date(1) is your friend:
> 
> For instance:
>  $ for y in {1900..2199} ; do echo -n "$y "; for m in {1..12}; do date +%A -d 
> $y-$m-13; done | grep -c Friday ; done
> 
> shows there are between 1 and 3 Fridays per year.
> 
> 
> Or a mere listing:
>  $ for y in {1900..2199} ; do for m in {1..12}; do date +%A -d $y-$m-13; 
> done; done | sort | uniq -c | sort -rn
> 
> That the most common weekday in these three centuries for the 13th is... you 
> guessed it, Friday.

Can't resist... cal(1)'s ncal option/version puts all Fridays on a line, so...

$ for y in {1900..2199}; do ncal $y | grep ^Fr | tr \  \\n |
grep 13 | wc -l; done | sort | uniq -c
128 1
128 2
 44 3

and using the full range of cal(1) years:

$ time for y in {1..9999}; do ncal $y | grep ^Fr | tr \  \\n |
grep 13 | wc -l; done | sort | uniq -c
   4274 1
   4258 2
   1467 3

real    0m52.301s
user    0m33.116s
sys     0m11.816s

and one more pass to count 'Friday the 13th' per month, but I guess
there can only be 0 or 1 anyway, so probably not very interesting:

$ time for m in {1..12}; do echo m=$m; for ((y=1; y<9999+1; y+=1)); \
do ncal $m $y| grep ^Fr | tr \  \\n | grep 13 | wc -l; done |
sort | uniq -c; done 
m=1
   8552 0
   1447 1
m=2
   8574 0
   1425 1
m=3
   8552 0
   1447 1
...
m=11
   8553 0
   1446 1
m=12
   8573 0
   1426 1

real    10m25.149s
user    6m57.916s
sys     2m4.284s

I cheated and edited and filtered the above output to show counts by
month:

1403 8
1405 10
1425 2
1425 6
1426 12
1426 9
1446 11
1447 1
1447 3
1447 4
1447 5
1447 7

For some reason August and October have the fewest Friday the 13ths.
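For a single year, date(1) gives the count directly; a quick sketch (GNU date assumed), using 2016 from the subject line, which has just the one in May:

```shell
#!/usr/bin/env bash
# Count Friday-the-13ths in a given year; %u prints 5 for Friday (GNU date).
year=2016 count=0
for m in {1..12}; do
    dow=$(date -d "$(printf '%d-%02d-13' "$year" "$m")" +%u)
    [ "$dow" = 5 ] && count=$((count + 1))
done
echo "$count"    # 1 for 2016
```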




Re: Design question(s), re: why use of tmp-files or named-pipes(/dev/fd/N) instead of plain pipes?

2015-10-19 Thread Ken Irving
On Mon, Oct 19, 2015 at 12:49:25PM -0700, Linda Walsh wrote:
... 
> I observe similar inconsistencies and confusion around
> the use of 'dynamic vars' -- not really being global, not really
> being local, but supposedly on some dynamic frame, "somewhere",
> but not anywhere they've been declared or used.

Understanding dynamic (as opposed to static) variables is pretty key to
understanding how bash works, but the manual doesn't seem to even address
the issue, other than perhaps this line in the section about 'local':

When local is used within a function, it causes the variable name
to have a visible scope restricted to that function and its children.

The children referred to are functions called from the function, but which
are otherwise in the global namespace, so that a child of one function
might be separately a child of a different function.  I kind of like
the way it works, but it's probably confusing if one is more used to
(or thinking of) static variable scoping, which is much more common in
many languages.
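A minimal demo of the dynamic behavior (names are just for illustration):

```shell
#!/usr/bin/env bash
# inner() declares no local v, so at run time it reads and writes
# whichever v is in scope in its caller -- here outer()'s local copy.
inner() { echo "inner sees: $v"; v=changed; }
outer() { local v=outer; inner; echo "outer now has: $v"; }
v=global
outer
echo "global still: $v"
```

The global v survives untouched because inner() only ever saw outer()'s local.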

I find it convenient to declare most variables using local, to the extent
of putting the main code in an explicit (and somewhat redundant) main()
function, called by main "$@" at the end of the script.

Here's an example utility function (to split stuff into an array) that
relies on the calling functions to declare the array variable V:

split() { local IFS=$1; shift; V=($*); } # SEP ARGS...

foo() {
local -a V
split , "some,comma,separated,words"
printf "%s\n" "${V[@]}"
...
}

bar() {
local -a V
split ' ' "some space separated words"
...
}

baz() {
split : "some:colon:separated:words"
...
}

The V seen by split() (but alas, not declared within it) is a distinct
variable for foo() and bar().  baz() would cause a global V to be
defined implicitly (from within split()), which I'd tend to avoid.

Ken




Re: Design question(s), re: why use of tmp-files or named-pipes(/dev/fd/N) instead of plain pipes?

2015-10-19 Thread Ken Irving
On Sun, Oct 18, 2015 at 07:36:49PM -0400, Chet Ramey wrote:
> On 10/17/15 8:43 PM, Linda Walsh wrote:
> > 
> > Chet Ramey wrote:
...
> >> I think you're missing that process substitution is a word expansion
> >> that is defined to expand to a filename.  When it uses /dev/fd, it
> >> uses pipes and exposes that pipe to the process as a filename in
> >> /dev/fd.  Named pipes are an alternative for systems that don't support
> >> /dev/fd.
> > -
> > ??? I've never seen a usage where it expands to a filename and
> > is treated as such.
> 
> Every example of process substitution ever given expands to a filename,
> and the result is treated as a filename.

The manpage section on process substitution could perhaps present the
concept more clearly by starting with something like the sentence just
above, e.g., very roughly:

Process Substitution, taking the form of <(list) or >(list),
expands the process list to a filename, allowing the construct to
be used in place of a filename for output or input to a command.
It is supported on systems that support named pipes (FIFOs) or the
/dev/fd method of naming open files. ...

The section currently goes right into what seem like implementation details,
and the actual use of the construct is only mentioned in the fourth sentence
or so.
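Handing the word to echo makes the expansion visible (the exact /dev/fd number varies by system and shell state):

```shell
#!/usr/bin/env bash
# <(list) expands to a filename naming a pipe; commands open it like a file.
echo <(true)                                  # something like /dev/fd/63
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo identical
```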

Ken




Re: Does [ -f FILE ] have a bug on testing a symlink ?

2015-02-09 Thread Ken Irving
On Mon, Feb 09, 2015 at 09:00:12PM +, Cheng Rk wrote:
> 
> To bug-bash@gnu.org:
> 
> According this documentation `help test`, I am expecting it should
> return false on anything other than a regular file,
> 
> -f FILE	True if file exists and is a regular file.
> 
> but why it returned true on a symlink to a regular file?
> 
> $ [ -f tmp/sym-link ] && echo true
> true

Symlinks are transparent for most purposes, and in your case the test
is against the file pointed to by the symlink.  If you want to test the
symlink itself you can use the -h or -L test operators.
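For example (scratch files under mktemp; names hypothetical):

```shell
#!/usr/bin/env bash
# -f follows the link to the file behind it; -h/-L test the link itself
# and are the file tests that do not dereference.
cd "$(mktemp -d)" || exit
touch regular
ln -s regular link
ln -s missing dangling
[ -f link ]     && echo "link: -f true"
[ -h link ]     && echo "link: -h true"
[ -f dangling ] || echo "dangling: -f false"
[ -h dangling ] && echo "dangling: -h true"
```

Note the dangling link: -f is false because the target is gone, but -h still sees the link.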



Re: Feature request - ganged file test switches

2014-08-13 Thread Ken Irving
On Wed, Aug 13, 2014 at 12:16:52PM -0400, Steve Simmons wrote:
> 
> On Aug 12, 2014, at 4:36 PM, Chet Ramey  wrote:
> 
> > On 8/9/14, 7:07 AM, Steve Simmons wrote:
> > 
> >> It would be nice to have ganged file test switches. As an example, to test 
> >> that a directory exists and is properly accessible one could do
> >> 
> >>  if [[ -d foo ]] && [[ -r foo ]] && [[ -x foo ]] ; then . . .
> >> 
> >> but
> >> 
> >>  if [[ -drx foo ]] ; then . . .
> >> 
> >> is a lot easier.
> > 
> > Sure, but it's only syntactic sugar.
> 
> Knew that going in :-). Other discussion points out how limited it is;
> I'm perfectly happy pulling back. My thoughts on how to do this more
> flexibly boil down to the capabilities gnu find has w/r/t file types
> and modes. Unfortunately we have a few systems which lack gnu find
> and are "vendor supported appliances" (eyeroll) and we're unable to
> add new software beyond simple scripts. Which also means that any new
> bash feature would probably be unavailable for years, so it's not like
> this is a big loss.
>
> If others have no interest in this syntactic sugar I see little point
> to adding it; a broader and more flexible solution is just to use find
> as above.

I like the idea, but switch negation would need to be supported, and
I don't think that's been covered sufficiently.  Using ! as a switch
modifier might be possible, and I like it, but would then also apply to
single filetest switches, e.g., -!e foo would be the same as ! -e foo.
Maybe that's possible, but it seems a fairly major addition to the syntax.

Using caps for switch negation is pretty common, but the filetest switches
already use some uppercase values, so I don't think that's possible in
this case.

I'm a little confused about the 'before' example:

if [[ -d foo ]] && [[ -r foo ]] && [[ -x foo ]] ; then . . .

I thought that && could be used reliably within the [[ ]] construct,
including short-circuiting the tests, so this could be:

if [[ -d foo && -r foo && -x foo ]] ; then . . .

I don't see how the bundled switches could be ambiguous, so I must be
missing something.
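For what it's worth, the sugar is easy enough to fake with a small helper today (ftest is a made-up name, not a proposal):

```shell
#!/usr/bin/env bash
# ftest SWITCHES FILE: apply each letter of SWITCHES as a file test in
# order, short-circuiting on the first failure, e.g. ftest drx /tmp
ftest() {
    local sw=$1 file=$2 i
    for ((i = 0; i < ${#sw}; i++)); do
        test -"${sw:i:1}" "$file" || return
    done
}
ftest drx /tmp        && echo "directory, readable, searchable"
ftest drx /etc/passwd || echo "not all tests pass"
```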

Ken



feature request: apply parameter expansions to array keys

2013-12-15 Thread Ken Irving
No bug here, but I naively expected the pattern substitution expansion
to work on array keys as well as values, e.g.:

$ declare -A h
$ h[foo]=x h[bar]=y
$ # show keys and values:
$ printf "\t<%s>\n" "${!h[@]}" "${h[@]}"
	<bar>
	<foo>
	<y>
	<x>
$ # try to pad keys and values:
$ printf "\t<%s>\n" "${!h[@]/#/  }" "${h[@]/#/  }"
<>
<  y>
<  x>

I wanted to print out array keys with some padding, easily done in a loop,
but I wanted the padding applied to the separate strings generated using
the quoted [@] expansion.

The manpage documents the ${!name[@]} 'List of array keys' expansion
separately from the pattern substitution expansion, so it's all working
as documented, but I think the syntax could allow this.
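Meanwhile a loop with printf's field width does the padding without touching the keys at all (sorted here only because associative-array order is unspecified):

```shell
#!/usr/bin/env bash
declare -A h
h[foo]=x h[bar]=y
# %8s right-aligns each key in an 8-column field; printf reuses its
# format for each key, so no substitution on the keys is needed
for k in "${!h[@]}"; do
    printf '%8s -> %s\n' "$k" "${h[$k]}"
done | sort
```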

I'm not running the latest bash: 

$ bash --version
GNU bash, version 4.2.37(1)-release (i486-pc-linux-gnu)

Ken



Re: feature request: file_not_found_handle()

2013-08-21 Thread Ken Irving
On Wed, Aug 21, 2013 at 12:21:11PM -0700, Eduardo A. Bustamante López wrote:
> On Wed, Aug 21, 2013 at 08:39:53PM +0200, Andreas Gregor Frank wrote:
> > Hello Greg,
> > 
> > this is a feature request for no_such_file_or_directory_
> > handle(). I do not want to talk about missing quotes in anyone's code
> > example!
> 
> You are free to send patches with the proposed feature. That way we
> would be able to test it, and see if it doesn't conflict with
> standards or existing codebases.
>
> There doesn't seem to be much demand for this feature, aside from you
> two. Therefore, I don't think it's worth the time of Chet to add it,
> when he has clearly better things to do with his time (fixing real
> bugs for example). Sure, a lot of things could be added to bash,
> ...
> So, I repeat. If you're in need of the feature, implement it.
> Otherwise, you're asking too much IMHO.

I did send such a patch several years ago, and haven't brought the issue
up since then.  The OP's suggestion/request was somewhat different than
what I had come up with, and is probably a better idea anyway, so yes,
I probably should submit a patch for this feature.

Thanks,

Ken



Re: feature request: file_not_found_handle()

2013-08-21 Thread Ken Irving
On Wed, Aug 21, 2013 at 08:10:50AM -0400, Greg Wooledge wrote:
> On Wed, Aug 21, 2013 at 02:22:24AM -0800, Ken Irving wrote:
> > $ cat $(ambler.method dispatch)
> > #!/bin/bash
> > method=$1 && shift
> > test -n "$method" || exit
> > for s in $(ls|shuf); do
> > tob $s.$method "$@" &
> > done
> 
> As far as I can tell, this is some incredibly stupid crap thrown together
> by an "object oriented" junkie to try to make one language look like some
> other language.  That is ALWAYS a bad idea.

The point is not to 'make' bash look like anything else, it is rather to use
the filesystem and bash as tools to do things.  It's an experimental system,
one that interests me, seems to work, and I don't think it necessarily does
harm to those who don't use it.

> If you want to do a "method" to an "object", bash already provides a
> syntax for that:
> 
>   method object
> 
> Not:
> 
>   object.method
> 
> The latter is ass-backwards.  It's simply ludicrous.  Stop it.
 
I've found that it can work and can be useful, but am not surprised
at the vitriol you express.  Is that sufficient reason to not even
consider providing a hook to handle forms like dir/object.method?
That is the issue.

> Now, look at this crap:
> 
> > for s in $(ls|shuf); do
> 
> Do you know how hard we work every day to try to stamp out these sorts
> of bugs?  This is so bad I'm laugh/crying right now.

Yes I do have some idea.  I read the bash-bug list regularly, and many
times (even if you can't tell) refer to the wiki you host, read the FAQs,
RTFMs, try to absorb what wisdom I can, and in general try to improve on
the bash coding I do use.  But this was a simple case where there are no
spaces in the names, I wanted to handle the contents of the directory in
more or less random order, and this was a simple if stupid way to do that.
 
> touch 'this is a filename with spaces'

$ ambler.touch 'this is a filename with spaces'

$ ambler.ls -l
total 0
lrwxrwxrwx 1 site site  8 Oct 13  2012 dam5 -> ../wild/
lrwxrwxrwx 1 site site 17 Oct 13  2012 dam6 -> ../upper-iniakuk/
lrwxrwxrwx 1 site site 14 Oct 13  2012 dam7 -> ../upper-reed/
lrwxrwxrwx 1 site site 19 Jul 31 12:12 dam8 -> ../upper-kogoluktuk
lrwxrwxrwx 1 site site 11 Oct 13  2012 das1 -> ../bettles/
lrwxrwxrwx 1 site site 10 Oct 13  2012 das2 -> ../alatna/
lrwxrwxrwx 1 site site  9 Oct 13  2012 das3 -> ../sfork/
lrwxrwxrwx 1 site site  8 Oct 13  2012 das4 -> ../reed/
-rw-r--r-- 1 site site  0 Aug 21 10:59 this is a filename with spaces

$ ambler.dispatch types
CR1000-Data-Archive:Directory:Object
CR1000-Data-Archive:Directory:Object
CR1000-Data-Archive:Directory:Object
CR1000-Data-Archive:Directory:Object
CR1000-Data-Archive:Directory:Object
CR1000-Data-Archive:Directory:Object
CR1000-Data-Archive:Directory:Object
CR1000-Data-Archive:Directory:Object
chained method a did not resolve
chained method is did not resolve
chained method this did not resolve
chained method with did not resolve
chained method filename did not resolve
chained method spaces did not resolve

$ ambler.rm 'this is a filename with spaces'

There is no reason that names with spaces could not be used, I just
have no interest in using them myself, and have not bothered with
supporting them.  If the system did support them, then it would either
work or maybe would report that 'this is a filename with spaces' does not
have that method.  Obviously the dispatch method would need to be fixed,
but other than just: 'for s in *; do "$s.$method"; done', I'd need to look
up how to shuffle up the names, e.g., probably use a while loop with the
shuffling done in a pipe at the end, or put the names in an array, etc..
That's all doable, but it wasn't necessary to do for this case.

And that is all well beside the point, which is whether a
No_such_file_or_directory_handler hook (or equivalent) might be provided.

Ken




Re: feature request: file_not_found_handle()

2013-08-21 Thread Ken Irving
On Tue, Aug 20, 2013 at 04:44:57PM +0200, Roman Rakus wrote:
> You are badly using features of bash. Write a script which will do
> things for you, or use any other language/shell.

There's only one feature being used, a hook that bash calls in 
the event that a command is not found.  The request is just to
extend that behavior to other cases of the command not being found.
In either case, bash is done with the command anyway, giving up
on it, about to emit an error message and quit.  So what's the
harm in using the hook to do something else?

> Please, accept this response as a suggestion.
> 
> command_not_found_handle is designed to do other things than you are
> expecting.

The only use I've seen, and I think where it originated, was a way
to suggest packages for installation when a command failed but could
be provided by a package.  Kind of a gimmick, perhaps, but sometimes
probably useful and a neat trick, maybe also potentially annoying.

I don't know about Andreas' application, but mine is largely written
in bash, and is a way to deploy systems using mostly shell scripting or
whatever other programs or languages you might want to use.

Your suggestion was:

>  Write a script which will do
> things for you, or use any other language/shell.

I did write such a script, and can use it directly and have done
so a lot.  But the convenience of calling it automatically is  
significant, and makes for a very natural extension to the 
shell, in my opinion of course.

The 'bad' commands that bash is failing are not likely to be
confused with other commands, as few commands have . in the 
name.  But even if they did, bash would just run them anyway.

Here's a method, the only one in a simple type that is for grouping some
objects together and dispatching another method on them in random order.

$ cat $(ambler.method dispatch)
#!/bin/bash
method=$1 && shift
test -n "$method" || exit
for s in $(ls|shuf); do
tob $s.$method "$@" &
done

Just a bash script, no tricks.  On the command line, with the
command-not-found hook, I'd omit the tob and let it be called
implicitly, for convenience.

I think this use is orthogonal to bash, and not in conflict 
with it.

Ken

> 
> 
> RR
> 
> On 08/19/2013 10:29 PM, Andreas Gregor Frank wrote:
> >Hi Chet,
> >
> >sorry, i thought you talk about the bash code.
> >I didn't want to show my own usecase but now i have to ;-):
> >I have a File class and can construct a File "object" for example:
> >File anObjectName /etc/passwd
> >and then i can do
> >e.g. anObjectName.getInode (this already works with
> >command_not_found_handle() )
> >But if i do a:
> >File /etc/passwd /etc/passwd
> >and then
> >/etc/passwd.getInode (i think it would be nice if the normal files in a
> >filesystem could be treated like objects)
> >then there is nothing that triggers the command_not_found_handle() to split
> >"object" and method...
> >So at the moment slashes are forbidden in object names in my fun project.
> >
> >Now you know why your bash example for ckexec() isn't a solution for me.
> >
> >bye
> >Andreas
> >
> >
> >2013/8/19 Chet Ramey 
> >
> >>On 8/19/13 6:57 AM, Andreas Gregor Frank wrote:
> >>>Hi Chet,
> >>>
> >>>I have no idea if there is "enough" demand, but i think there will be
> >>some
> >>>ideas to use this feature...
> >>>I still think it is a question of consistency to be able to handle a "No
> >>>such file or directory event", if i can do this with a "command not found
> >>>event" (independent of the command_not_found_handle history).
> >>>
> >>>You say you can easily test whether or not if the file in the pathname
> >>exists.
> >>
> >>That is not what I said.  I said that you, the script writer, can check
> >>whether or not a filename containing a slash is executable before
> >>attempting to execute it.  Maybe a function something like this (untested):
> >>
> >>ckexec()
> >>{
> >> case "$1" in
> >> */*);;
> >> *)  "$@"  ; return $? ;;
> >> esac
> >>
> >> if [ -x "$1" ]; then
> >> "$@"
> >> else
> >> other-prog "$@"
> >> fi
> >>}
> >>
> >>
> >>Chet
> >>--
> >>``The lyf so short, the craft so long to lerne.'' - Chaucer
> >>  ``Ars longa, vita brevis'' - Hippocrates
> >>Chet Ramey, ITS, CWRU    c...@case.edu
> >>http://cnswww.cns.cwru.edu/~chet/
> >>
> 
> 

-- 
ken.irv...@alaska.edu, 907-474-6152
Water and Environmental Research Center
Institute of Northern Engineering
University of Alaska, Fairbanks



Re: feature request: file_not_found_handle()

2013-08-18 Thread Ken Irving
On Sun, Aug 18, 2013 at 06:30:47PM -0700, Linda Walsh wrote:
> 
> Chet Ramey wrote:
> >On 8/14/13 7:44 AM, Andreas Gregor Frank wrote:
> >>Hi,
> >>
> >>i think a file_not_found_handle() or a modified command_not_found_handle(),
> >>that does not need an unsuccessful PATH search to be triggered, would be
> >>useful and consistent.
> >
> >The original rationale for command_not_found_handle is that there was no
> >other way to determine whether a command could be found with a PATH search.
> >(well, no easy way).
> >
> >A PATH search is suppressed when the command to be executed contains a
> >slash: the presence of a slash indicates an absolute pathname that is
> >directly passed to exec().  Since there's no search done, you know exactly
> >which pathname you're attempting to execute, and you can easily test
> >whether or not it exists and is executable.
> ---
> 
>   How does "lib/font/fontname.ttf" indicate an absolute path?

It's not a question of absolute or relative in that sense, but just
whether it looks like a path of any sort due to the slash.   The context
is for the first entry on a command line, and there's no point in
searching PATH if the value has a slash.  Absolute or relative path makes
no difference.  I suppose you could say that it's absolutely a path...


> This reminds me of the "dot" (source) command discussion but a bit
> reversed?,  Where it uses the presence of any '/' to force a PATH
> relative search as the first action.
> 
> So if '/' is in a "dot" path, that means it searches path for the file,
> but if it doesn't contain a "/" then it only looks in the CWD??
> 
> What if it starts with a "/"... does that override the PATH search?

It might sound similar but I don't think there's much relation between
this case, where we are dealing with the command to be executed, and the
source command and how it handles its arguments.

We want the attempted command, e.g., lib/font/fontname.ttf, once it fails
bash's attempts to execute it, to be passed to an alternative shell so it
can have a go.  In my system the handler would try to resolve 
lib/font/fontname as an object and ttf as a method, or maybe 
lib/font/fontname.ttf as the object and 'default' as the method.

Ken



Re: feature request: file_not_found_handle()

2013-08-18 Thread Ken Irving
On Sun, Aug 18, 2013 at 02:35:49PM -0400, Chet Ramey wrote:
> On 8/14/13 7:44 AM, Andreas Gregor Frank wrote:
> > Hi,
> > 
> > i think a file_not_found_handle() or a modified command_not_found_handle(),
> > that does not need an unsuccessful PATH search to be triggered, would be
> > useful and consistent.
> 
> The original rationale for command_not_found_handle is that there was no
> other way to determine whether a command could be found with a PATH search.
> (well, no easy way).
> 
> A PATH search is suppressed when the command to be executed contains a
> slash: the presence of a slash indicates an absolute pathname that is
> directly passed to exec().  Since there's no search done, you know exactly
> which pathname you're attempting to execute, and you can easily test
> whether or not it exists and is executable.
> 
> Is there enough demand to make this feature addition worthwhile?

I guess there are two people that would like to see it, so not exactly a
groundswell of demand.  As a user of bash, there's zero cost on my part
for adding the feature, and real value in having it available.

The alternative is, as now, not being able to use tab-completion easily
to reach an object, or having to edit the slashes out of the command,
so really just more typing.

A simple workaround is to tab-complete to cd into an object
(directory), at which point method calls are local, no slashes, and the
command-not-found hook is fired.

Thanks for considering it at least!

Ken




Re: feature request: file_not_found_handle()

2013-08-18 Thread Ken Irving
On Sat, Aug 17, 2013 at 01:46:16PM +0200, Andreas Gregor Frank wrote:
> same reason for me: some object-oriented shell (
> http://oobash.sourceforge.net/)

We're both running commands in the form 'object.method ...', and it works
very naturally on the shell using the command-not-found hook.  The problem
comes when, e.g., the object is incrementally built up using tab completion
so that it contains slashes, and the hook won't fire.

> 
> "But given that the first entry on a command line pretty much has to be
> a command, I'm not sure it makes sense to invoke file_not_found_handle()"
> 
> I think you are right.

On the other hand, there'd be no confusion if the handler were named to
match the error message, so maybe no_such_file_or_directory_handler()
could work.  Still, it might be simpler to fix the current behavior.
  
> But then we go in circles...:
> http://gnu-bash.2382.n7.nabble.com/command-not-found-handle-not-called-if-command-includes-a-slash-tp7118.html
> 
> @all: If there is a reason for not fixing this 'bug', i would like to hear.

I suspect the reason may just be disinterest, and no one has made any
noise about the issue.  Object oriented shell is perhaps an arcane area,
but I find it quite useful, and have several different systems running
using it.

My scheme relies on following search paths to resolve executables,
and could likely largely be implemented by setting PATH dynamically
according to $object, followed by exec $method.

Most object oriented shell schemes that I've found and looked at are based
on using the innards of shell programming, some using functions, some
using arrays, some using variables, namespaces in variable name prefixes,
etc.  Using these systems often requires significant buy-in to the scheme,
e.g., having to follow some discipline to make everything work.

My interest is in trying to minimize the machinery needed to make
it happen, and trying to keep necessary conventions that have to
be followed to a minimum.  That's the goal anyway.  Methods are any
ordinary executables, executed in the context of the object (the 
handler cd's to the object directory).

The current machinery for my scheme is mostly one executable, the handler
or executive, really just yet another command line interpreter or shell.
The main convention is that symlinks named ^ are needed to identify
the type of objects.  The ^ link in a directory object resolves via
the handler to a directory with methods, i.e., a bin or lib directory,
which can also have a ^ link, and so on, effectively building up PATH.

I have to avoid going on and on about this.  I have a github project
for it, but have not updated the documentation for quite a while,

  https://github.com/kirv/thinobject

I'm not sure where an appropriate place to discuss the general topic
would be, but at least the subject topic seems appropriate for bash-bug,
whether as a bug or feature request.

Ken



Re: feature request: file_not_found_handle()

2013-08-14 Thread Ken Irving
On Wed, Aug 14, 2013 at 01:44:08PM +0200, Andreas Gregor Frank wrote:
> Hi,
> 
> i think a file_not_found_handle() or a modified command_not_found_handle(),
> that does not need an unsuccessful PATH search to be triggered, would be
> useful and consistent.
> 
> i found this old (Dec, 2009) discussion :
> http://gnu-bash.2382.n7.nabble.com/command-not-found-handle-not-called-if-command-includes-a-slash-tp7118.html
> 
> Why are the patches not part of the bash?
> 
> Use case:
> -see: command_not_found_handle()

It didn't occur to me to see if there was a file_not_found_handle() hook,
but I'd use it if available.  I use the command_not_found_handle() all the
time in my 'thinobject' object-oriented shell system, but need to avoid
slashes to invoke the hook per the subject 'bug' (still not sure it's really
a bug).

$ declare -f command_not_found_handle
command_not_found_handle () 
{ 
exec /opt/bin/tob "$@"
}

The tob program looks for a special file to infer the type of an object,
and works if either / or . delimits objects, e.g.,

$ tob rooftop.ingest.types
Telemetry-CR1000-Site:Directory:Object

$ tob rooftop/ingest.types
Telemetry-CR1000-Site:Directory:Object

But the handler is not invoked in the latter case.

$ rooftop.ingest.types
Telemetry-CR1000-Site:Directory:Object

$ rooftop/ingest.types
-su: rooftop/ingest.types: No such file or directory

It'd be great to have this fixed, or to have a file_not_found_handle().
But given that the first entry on a command line pretty much has to be
a command, I'm not sure it makes sense to invoke file_not_found_handle()
in this case, or whether it might be confusing as to when it gets invoked.
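For reference, a stripped-down demo of the current behavior (the handler body here is illustrative only):

```shell
#!/usr/bin/env bash
# The hook fires only after a failed PATH search; a word containing a
# slash skips the search, so the handler never runs for it.
command_not_found_handle() { echo "handler got: $1"; return 127; }
no-such-command 2>/dev/null
./no/such/path 2>/dev/null || echo "slash form: handler not called"
```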

Ken




Re: eval doesn't close file descriptor?

2013-02-12 Thread Ken Irving
On Tue, Feb 12, 2013 at 04:22:08PM -0500, Matei David wrote:
> Generally speaking, it took me quite some time to figure out how to
> properly create a "double pipe" without using any intermediate files or
> fifos. The concept is very easy: in -> tee -> two parallel, independent
> pipes -> join -> out. A first complication is the limited pipe capacity,
> and the possibility of one side getting stuck if the other stops pulling in
> data. I then wrote a cat-like program which doesn't block on stdout, but
> keeps reading stdin, buffering in as much data as needed. I used it at the
> very end of either pipe. But then I had to create the actual processes, and
> then I stumbled upon all these issues with coproc and file descriptors. You
> leave the wrong one open and the thing gets stuck... I wish there was a
> howto on this subject.

I really like to see these threads that poke at the edges of things
in bash, providing a great learning opportunity for lurkers like me.
But that said, perhaps what you're trying to do might be handled easily
by socat; from http://www.dest-unreach.org/socat/doc/README:

socat is a relay for bidirectional data transfer between two
independent data channels. Each of these data channels may be a file,
pipe, device..., a socket..., a file descriptor (stdin etc.) ...

Just a thought.
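And when the input can simply be read twice (a function or a seekable file), the tee drops out entirely and process substitution handles both the split and the join; a sketch:

```shell
#!/usr/bin/env bash
# Two independent pipelines over the same data, joined line-by-line by
# paste -- no temp files or named pipes to manage by hand.
data() { seq 3; }
paste <(data | awk '{print $1 + 1}') <(data | awk '{print $1 * 2}')
```

The genuinely hard case, as you describe, is a single non-rewindable stream, where the pipe-capacity and blocking issues come in.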

Ken



Re: typeset -p on an empty integer variable is an error. (plus -v test w/ array elements)

2013-01-14 Thread Ken Irving
On Mon, Jan 14, 2013 at 08:57:41PM +0100, John Kearney wrote:
> ...
> btw
> || return $?
> 
> isn't actually error checking, it's error propagation.

Also btw, I think you can omit the $? in this case;  from bash(1):

return [n]
...
If n is omitted, the return status is that of the  last  command
executed  in the function body.  ...

and similarly for exit:

exit [n]
...  If  n  is  omitted,
the exit status is that of the last command executed.  ...
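Quick check of both:

```shell
#!/usr/bin/env bash
# return with no argument passes on the status of the last command run,
# so `false || return` behaves the same as `false || return $?`
f() { false || return; }
f; echo "f returned $?"
# likewise exit with no argument in a subshell
(false; exit); echo "exit passed on $?"
```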

Ken



Re: Syntax Question...

2011-08-18 Thread Ken Irving
On Thu, Aug 18, 2011 at 10:13:45AM -0700, Linda Walsh wrote:
>Um...the description from "<<<",  above indicates it is...subject
> to pathname expansion...  If I had matching files in my dir, it
> expanded and returned them,
> so I'm pretty sure it does PN expansion.

As noted, you're being fooled by not quoting the variable, which is
expanded in the echo and not in the <<< operation.

$ rm *.x
$ touch foo.x bar.x
$ read a <<< *.x
$ echo "$a"
*.x
$ echo $a
bar.x foo.x

Ken




Re: Syntax Question...

2011-08-16 Thread Ken Irving
On Tue, Aug 16, 2011 at 03:41:19PM -0700, Linda Walsh wrote:
> Ken Irving wrote:
> >It seems to me that there are real bugs in applying set -e that can only
> >be fixed by handling more special cases in the bash code,and those cases
> >may vary for different scripts.
> 
> >[snip]
> >set ...
> >-e  The shell does not exit if the command that fails is part of the
> >command list immediately  following  a while or until keyword,
> >part of the test following the if or elif reserved words,
> >part of any command executed in a && or || list except the
> >command following the final && or ||, any command in a pipeline
> >but the  last,  or  if  the  command's return  value  is being
> >inverted with !
> >A constructive contribution might be made by seeing if you can add your
> >special cases to that list, even better if you can help by identifying
> >where it might be done in the code.
> -
> ARRGGG...these were not special cases, this is how bash normally
> worked in its non-posix mode SINCE they were created! (3.0 and
> up through 4.0).
> 
> 4.1 breaks the previous standard. -- it's not compatible with
> bash 3.0 or 4.0 in this regard
> 
> Example:  from 2 different linux systems, one w/bash 4.0 and one w/
> the broken 4.1 (which will likely be reverted to 4.0 very soon if 4.1
> doesn't get a fix)..
> 
> Here is a working 'bash (4.0) vs.  current, below:
> 
>Astarte:/tmp> rpm -q bash
>bash-4.0-18.4.1.i586
>Astarte:/tmp> ( set -e ; ((a=0)); echo "are we here?"  )
>are we here?
>Astarte:/tmp>
> 
> Now for the current behavior:
> 
>Ishtar:/tmp> rpm -q bash
>bash-4.1-20.25.1.x86_64
>Ishtar:/tmp> ( set -e ; ((a=0)); echo "are we here?" )
>Ishtar:/tmp>
> 
> 
> You see the difference?   In 4.0, the assignment didn't cause
> an errexit. -- ((a=0)) has NEVER cause an errexit since it

Please; ((a=0)) is documented to return a non-zero exit status when the
expression evaluates to 0.  That's what it's for.  It is *not* an
'errexit' or an error of any sort; it's just the command's exit status.
That set -e trips on it is exactly what set -e is documented to do, just
as it would have done under the other conditions mentioned in the manpage
if code hadn't been added to specially handle those cases.

Unless (()) and let show up in the description for the set -e exceptions, 
it seems to me that it's working correctly now, and maybe it wasn't before.
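A minimal demonstration of the documented behavior being discussed, run in
throwaway subshells so set -e can't take down the parent:

```shell
# ((a=0)) evaluates to 0, so its exit status is 1; with set -e in
# effect the shell exits right there (the bash 4.1 behavior).
bash -c 'set -e; ((a=0)); echo "not reached"'
echo "exit status: $?"   # exit status: 1
# A nonzero result gives exit status 0, so execution continues:
bash -c 'set -e; ((a=1)); echo "reached"'
```

Appending `|| true` to the arithmetic command is the usual way to keep
set -e from tripping on a result of 0, per the && / || exception in the
manpage.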

> was introduced in bash 3.x[0?]This is the now broken behavior
> and it was done to make one of bash's extended features compatible
> with POSIX -- which is a very BAD reason, as Bash has a posix mode
> for that -- and that POSIX mode changes shouldn't be
> contaminating the BASH-extended set.

Yes, we've heard all that before, several times. But the facts remain.
let and (()) are commands (with useful side effects, if you use them
correctly), set -e exits on non-zero exit values, etc., etc..

> I'm not asking for something "new", I'm asking for compatibility with
> the versions 3.0 -> 4.0

And I think it's clear that that's probably not going to happen, as the
way it works now is consistent and in accordance with the manual.

> Here's the quote from 4.0 on set -e:

(Also quoted above.)

> Note the above -- says it will exit on a "pipeline".
> as 'let' and '(())' did not take or accept I/O, they wouldn't
> normally be considered part of a pipeline -- and Greg has
> mentioned that ((1)), is a 'complex' command not a simple command
> (but it can't be used as part of a pipeline...)

That's a new argument, I think, and pretty weak.

$ ls | ((a=1)); echo $?
0

Just because a command ignores stdin doesn't mean it can't be used
in a pipeline.

$ bash -c 'set -e; ls | ((a=1)); echo x $?'; echo y $? 
x 0
y 0
$ bash -c 'set -e; ls | ((a=0)); echo x $?'; echo y $?
y 1
$ bash -c 'set -e; ls | ((a=0)) | cat; echo x $?'; echo y $?
x 0
y 0

Here the (()) is not at the end of the pipeline, so set -e is not triggered.

I don't see how this is not the correct behavior.

I suspect the warnings against using set -e are in part resulting from threads
such as this one, where folks get tripped up on some special case.  Maybe it's
possible to add yet another exceptional condition to keep set -e at bay, but 
you might consider trying another approach.  Good luck!

Ken




Re: Syntax Question...

2011-08-16 Thread Ken Irving
On Mon, Aug 15, 2011 at 08:19:01PM -0700, Linda Walsh wrote:
> >>today_snaps="$('ls' -1 ${snap_prefix} 2>/dev/null |tr "\n" " " )"
> >
> >This one is so bad, I saved it for last. Ack! Pt! Wait, what? Why?
> >What the? Huh?
> ...
> What would you do to search for files w/wild cards and return the output
> in a list?

Maybe this?

today_snaps=( ${snap_prefix} )

e.g., given:

$ ls note*
note  note-to-bob  note-to-bob.txt

$ unset foo bar; bar=note\*; foo=($bar); echo ${foo[@]}; echo $foo
note note-to-bob note-to-bob.txt
note

This result is an array; if you really want a string,

$ unset foo bar; bar=note\*; foo="$bar"; echo ${foo[@]}; echo $foo
note note-to-bob note-to-bob.txt
note note-to-bob note-to-bob.txt

From bash(1):

   Pathname Expansion
   After  word  splitting, unless the -f option has been set, bash
   scans each word for the characters *, ?, and [.  If one of these
   characters appears, then the word is regarded as a pattern,
   and replaced with an alphabetically sorted list of file names
   matching  the  pattern.  ...
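A sketch of that expansion captured into an array, using made-up filenames
in a temporary directory; nullglob is an assumption about the desired
behavior, making an unmatched pattern expand to nothing instead of itself:

```shell
# Capture glob matches in an array rather than a space-joined string.
shopt -s nullglob
tmp=$(mktemp -d)
touch "$tmp/note" "$tmp/note-to-bob" "$tmp/note-to-bob.txt"
foo=( "$tmp"/note* )
echo "matches: ${#foo[@]}"      # matches: 3
none=( "$tmp"/nomatch* )
echo "no matches: ${#none[@]}"  # no matches: 0 (would be 1 without nullglob)
rm -r "$tmp"
```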


I'm guessing the 'Ack!' was maybe for 'useless use of subshell'?


> I'm sorry if you feel you waisted your time,  I usually don't stay stuck
> ...

I'm actually trying to follow along, but it's tough going.

It seems to me that there are real bugs in applying set -e that can only
be fixed by handling more special cases in the bash code, and those cases
may vary for different scripts.  I've never used it, and do use a lot of
'|| exit' constructs, but I think the core of the matter is conflating
'non-zero exit value' with 'error'.  set -e is not defined to trigger on
'errors', but rather on 'non-zero exit values'.  The problem is that,
for set -e to be generally useful, bash has to somehow internally disable
it under some conditions, as described in the manual:

set ...
-e 
...
The shell does not exit if the command that fails is part of the
command list immediately  following  a while or until keyword,
part of the test following the if or elif reserved words,
part of any command executed in a && or || list except the
command following the final && or ||, any command in a pipeline
but the  last,  or  if  the  command's return  value  is being
inverted with !.

A constructive contribution might be made by seeing if you can add your
special cases to that list, even better if you can help by identifying
where it might be done in the code.

Ranting, inferring that people have 'little minds', aren't good
or skilled programmers, etc., does little to make things better.
Excessive verbosity, lingo, abbrevs, etc., don't help either. ;-)

Ken




Re: Problem with open and rm

2011-03-16 Thread Ken Irving
On Wed, Mar 16, 2011 at 10:54:15AM +, Barrie Stott wrote:
> The script that follows is a cut down version of one that came from elsewhere.
> 
> #!/bin/bash
> 
> cp /tmp/x.html /tmp/$$.html
> ls /tmp/$$.html
> [ "$DISPLAY" ] && open /tmp/$$.html
> ls /tmp/$$.html
> rm -f /tmp/$$.html
> 
> I'm on an Imac with OS X 10.6.6. If I run the script as it stands,
> the open tries to open /tmp/$$.html in a new tab in the Safari
> browser and fails with the message: No file exists at the address
> ?/tmp/13551.html?. The terminal output is the following pair of lines,
> which suggest that the file is still around after open failed:
> 
> /tmp/13551.html
> /tmp/13551.html
> 
> If I comment out the final 'rm' line and run it again, I get what
> I want displayed (and a similar pair of lines on the terminal) but
> unfortunately the temporary file is left lying around.
> 
> My two questions are: 
> Why?

Assuming open doesn't somehow block, the script is probably working
and removing the file just after open is invoked.

> How can I change the script so that I can both view the file and have
> it removed?

Adding sleep 30 before rm ought to leave it around for a while.

Ken



Re: Problem with how to assign an array

2011-02-24 Thread Ken Irving
On Thu, Feb 24, 2011 at 03:42:57PM -0500, Greg Wooledge wrote:
> On Thu, Feb 24, 2011 at 02:55:13PM -0500, Steven W. Orr wrote:
> > I have three arrays
> > 
> > a1=(aaa bbb ccc ddd)
> > a2=(qqq www eee rrr)
> > a3=(fff ggg hhh)
> > 
> > I then set a_all
> > 
> > a_all=("${a1[*]}" "${a2[*]}" "${a3[*]}"
> 
> Missing ).  Also, a far more serious problem, you used * when you should
> have used @.
> 
> a_all=("${a1[@]}" "${a2[@]}" "${a3[@]}")
> 
> This is absolutely critical.  Without this, you are no longer maintaining
> the integrity of each element.  In fact, what you've got there will create
> a_all with precisely 3 elements.  Each of these elements will be a
> concatenation of the other arrays' elements with a space (or the first
> char of IFS if you've set that) between them.
> 
> > Later on, I decide which element of a_all I really want. So with my new 
> > index calculated, I say
> > 
> > real_a=( ${a_all[index]} )
> 
> This is also wrong.  This one does word-splitting on the element you're
> indexing, and the resulting set of words becomes a new array.

But if * was used in declaring the a_all elements, this _would_ recover
one of the arrays, e.g., real_a=(${a_all[1]}) should result in the same
array as a2.  (Ignoring the null-element issue, though.)

I had the same thought, use @ instead of *, but maybe the OP wants to 
get the original arrays and not individual elements?

> In fact, I can only guess what you're trying to do here.
> 
> If you want to assign a single element to a scalar variable, you
> should do:  element=${array[index]}
> 
> > And it worked really well until today. The problem is that I need an 
> > element of the aNs to be allowed to be null values.
 
As explained, using * expands '' or ' ' into just word-delimiting whitespace,
so the null elements are lost.
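A quick illustration of the difference: [*] joins the elements into one
word, and an empty element can't survive a later re-split of that word,
while "[@]" keeps each element intact, empty ones included:

```shell
a1=(aaa '' ccc)
star=( "${a1[*]}" )   # one element: the joined string "aaa  ccc"
at=( "${a1[@]}" )     # three elements, the empty one preserved
echo "star: ${#star[@]}  at: ${#at[@]}"   # star: 1  at: 3
```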

> No problem.
> 
> > Like this:
> > 
> > a1=(aaa bbb ccc ddd '' jjj kkk lll)
> 
> No problem.
> 
> > such that if index is 0, I'd like real_a to end up with 8 elements instead 
> > of 7.
> 
> Huh?  You mean during the concatenation?  (You changed the array name?)
> Do it correctly:
> 
> imadev:~$ unset a1 a2 big; a1=(a b '' c) a2=(d e f '')
> imadev:~$ big=("${a1[@]}" "${a2[@]}")
> imadev:~$ printf "<%s> " "${big[@]}"; echo
> <a> <b> <> <c> <d> <e> <f> <>
> 
> > I could create a sentinel, I could use a case statement, I could create all 
> > kinds of confabulations, but I'd like to know if there's a better way to do 
> > it.
> 
> Huh?

If by sentinel you (OP) mean a token standing in for null, I suspect that would
be the simplest approach.

> > I literally tried everything I could think of.

Listen to Greg... 

Ken

> You must learn the difference between "$*" and "$@".  (And the analogous
> treatment of * and @ in an array indexing context.)
> 
> imadev:~$ wrong=("${a1[*]}" "${a2[*]}")
> imadev:~$ printf "<%s> " "${wrong[@]}"; echo
> <a b  c> <d e f >
> 
> If you don't use the right syntax, you're going to have problems with
> elements that contain whitespace (or IFS characters) as well as empty
> elements as you already noted.




Re: bash 'let' can give error

2010-12-10 Thread Ken Irving
On Thu, Dec 09, 2010 at 05:52:49PM +, Dominic Raferd wrote:
> $ val=0; let val++; echo $val,$?; unset val
> 1,1
> 
> see the error code 1. Setting any other start value (except
> undefined) for val does not produce this error, the problem occurs
> for let val++ and let val-- if the start value is 0.
> 
> for let ++val and let --val the problem occurs if the result is 0.
> Also for the
> command:
> 
> $ val=10; let val=val+2*2-14; echo $val,$?; unset val
> 
> ...
> Why does this happen? Is it 'by design'? It makes arithmetic with
> bash let very dangerous because it can throw unexpected errors (and
> break scripts running with  set -e).

I don't know why this is done, but the behavior is clearly documented
in the manpage:

   let arg [arg ...]
  Each arg is an arithmetic expression to be evaluated (see ARITH-
  METIC EVALUATION above).  If the last arg evaluates  to  0,  let
  returns 1; 0 is returned otherwise.
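A short demonstration of that manpage text, including why the post- and
pre-increment cases behave differently at 0:

```shell
val=0; let val++; echo "val=$val status=$?"   # val=1 status=1 (old value 0)
val=0; let ++val; echo "val=$val status=$?"   # val=1 status=0 (new value 1)
# Common guard so set -e scripts don't die on a zero-valued expression:
val=0; let val++ || true
echo "val=$val"
```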

Ken



Re: pathname expansion part two

2010-10-15 Thread Ken Irving
On Fri, Oct 15, 2010 at 01:13:33PM -0600, Bob Proulx wrote:
> javajo91 wrote:
> > "For example, if you wanted to list all of the files in the directories /usr
> > and usr2, you could type ls /usr*.
> 
> Because the '*' is a file glob.  It is called a glob because it
> matches a glob of characters.  The process of the expansion is called
> globbing.  "/usr*" matches "/usr" and "/usr2" both.  That is expanded
> on the command line.
> 
>   $ ls /usr*
> 
> is the same as
> 
>   $ ls /usr /usr2
> 
> The ls command never sees a '*' because the shell expands it first.
> You can use echo to see what the shell has expanded.
>
>   $ echo foo /usr*
>   foo /usr /usr2

Note, though, that the '*' will still be there if the glob operation
fails to expand to anything.
 
  $ echo foo /usrz*
  foo /usrz*

I guess this makes sense, since just about all characters can be used in
filenames, but I always need to check for this case, e.g., in for loops.
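Two common guards for that case, sketched here against a path that can't
match anything: nullglob, or an existence test on each loop candidate:

```shell
shopt -s nullglob
for f in /no/such/dir/*; do echo "unexpected: $f"; done   # body never runs
shopt -u nullglob
for f in /no/such/dir/*; do
    [ -e "$f" ] || continue    # skips the literal, unexpanded pattern
    echo "unexpected: $f"
done
echo "done"
```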
 
Ken



Re: asking for a better way to implement this

2010-09-26 Thread Ken Irving
On Sun, Sep 26, 2010 at 06:15:57PM +0200, Christopher Roy Bratusek wrote:
> Hi all,
> 
> I'm writing a wrapper for rm, which does not let the file/directory be 
> removed, if
> there's a .dirinfo file in the directory containing "NoDelete".
> 
> (feel free to ask what that's all about.)
> 
> This is what I have:
> 
> xrm () {
> 
>   for path in $@; do
>   if [[ $path == -* || $path == --* ]]; then
>   local RMO+="$path"
>   else
>   basedir=$(echo "$path" | sed -e "s/$(basename path)//g")
>   if [[ -e "$path"/.dirinfo && $(grep NoDelete 
> "$path"/.dirinfo 2>/dev/null) != "" ]]; then
>   echo "can not delete delete $path, it's 
> protected"
>   elif [[ -e ${basedir}/.dirinfo && $(grep NoDelete 
> "$basedir"/.dirinfo 2>/dev/null) != "" ]]; then
>   echo "can not delete delete $path, it's 
> protected"
>   else
>   $(which rm) $RM_OPTS $RMO "$path"
>   fi
>   fi
>   done
> 
> }
> 
> Now, I wanted to ask, if there's a more elegant/better way to implement this.

Style is a matter of taste, but I think this is equivalent (not tested):

xrm () {
for path in "$@"; do
test "${path:0:1}" = - && local RMO+="$path " && continue
for try in "$path" "${path%/*}"; do
test -e "$try"/.dirinfo || continue
grep -q NoDelete "$try"/.dirinfo || continue
echo "can not delete $try, it's protected"
continue 2
done
$(which rm) $RM_OPTS $RMO "$path"
done
}

A few points: 

Since you don't quote $@ there's probably no reason to quote $path.

Your RMO will have options concatenated with no space between them.

Your sed 's///g' might misbehave when the basename also occurs earlier in
the path, e.g., xrm /tmp/home/tmp.  The bash % expansion only removes a
match at the end of the string.
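For example, with a path whose last component repeats earlier in the
string (a made-up path for the demo):

```shell
path=/tmp/home/tmp/file
echo "${path%/*}"    # /tmp/home/tmp -- strips only the trailing component
echo "${path##*/}"   # file -- the basename analogue, for comparison
```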

The -e option to sed seems to serve no purpose.

I'm guessing your $(which rm) is intended to avoid calling rm(), but maybe \rm 
would do the same thing?  No, that still calls the function... I'm not sure 
how to do that. 

This is the bug-bash list, maybe not the best place for this kind of thing...

Ken




Re: Bash style of if-then-else?

2010-09-23 Thread Ken Irving
On Thu, Sep 23, 2010 at 04:38:42PM -0500, Michael Witten wrote:
> On Mon, Aug 23, 2010 at 16:40, Eric Blake  wrote:
> > On 08/23/2010 03:29 PM, Peng Yu wrote:
> >>
> >> Hi,
> >>
> >> I'm wondering if there is a widely accepted coding style of bash scripts.
> >>
> >> lug.fh-swf.de/vim/vim-bash/StyleGuideShell.en.pdf
> >>
> >> I've seen the following style. Which is one is more widely accepted?
> >>
> >> if [ -f $file]; then
> >>    do something
> >> fi
> >>
> >> if [ -f $file];
> >> then
> >>    do something
> >> fi
> >
> > Neither.  You're missing adequate quoting in case $file contains spaces.
> >  More importantly, when using '[', the trailing ']' has to be its own
> > argument.  Personally, I tend to use the style with fewer lines:
> >
> > if [ -f "$file" ]; then
> >  do something
> > fi
> 
> This is also possible:
> 
>   [ -f "$file" ] && do_something
> 
> or perhaps:
> 
>   [ -f "$file" ] && {
> do_something_0
> do_something_1
>   }

While we're talking about style...  I prefer using 'test' rather than
'[..]' as it seems to me to be more readable.

test -f "$file" && do_something

test -f "$file" && {
do_something_0
do_something_1
}

if test -f "$file"; then
do something
else
do something else
fi

Are there actual advantages for [] over test?  I guess the former uses
one less byte than the latter.

Ken



Re: function grammar

2010-07-19 Thread Ken Irving
On Mon, Jul 19, 2010 at 10:46:30AM +0200, Bernd Eggink wrote:
> Am 19.07.2010 08:30, schrieb Ken Irving:
> >On Sun, Jul 18, 2010 at 11:53:02AM -0700, Linda Walsh wrote:
> >>
> >>from man bash, to define a function use;
> >>
> >>"function" "name"
> >>  OR
> >>"name" ()
> >>
> >>right?
> >>
> >>And Compound Commands are:
> >>
> >>  ()
> >>   {; )
> >>  (( expression ))
> >>  [[ expression ]]
> >>...et al
> >>
> >>so why do I get a syntax error for
> >>
> >>function good_dir [[ -n $1&&  -d $1&&  -r $1&&  -x $1 ]]
> >>
> >>bash: syntax error near unexpected token `[['
> >
> >I see this in bash(1):
> >
> > SHELL GRAMMAR
> > ...
> > Shell Function Definitions
> > ...
> > [ function ] name () compound-command [redirection]
> >
> >and do not see the version you show without the parens.
> 
> It's there. Look at the 3rd sentence:
> 
> "If the function reserved word is supplied,  the  parentheses  are
> optional."

So maybe the declaration could be fixed to show that, e.g., as either of:

name () compound-command [redirection]
function name [()] compound-command [redirection]

I can't see how to put that in one construct...
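The two forms side by side, as a sketch; with the function reserved word
the parentheses are optional, and in either form the body can be any
compound command, not just a brace group:

```shell
good_dir() [[ -n $1 && -d $1 ]]               # parens form, [[ ]] body
function good_dir2 { [[ -n $1 && -d $1 ]]; }  # reserved word, parens omitted
good_dir /tmp  && echo "parens form: ok"
good_dir2 /tmp && echo "reserved-word form: ok"
```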

Ken




Re: function grammar

2010-07-18 Thread Ken Irving
On Sun, Jul 18, 2010 at 11:53:02AM -0700, Linda Walsh wrote:
> 
> from man bash, to define a function use;
> 
> "function" "name" 
>  OR
> "name" () 
> 
> right?
> 
> And Compound Commands are:
> 
>  ( )
>   { ; )
>  (( expression ))
>  [[ expression ]]
> ...et al
> 
> so why do I get a syntax error for
> 
> function good_dir [[ -n $1 && -d $1 && -r $1  && -x $1 ]]
> 
> bash: syntax error near unexpected token `[['

I see this in bash(1):

SHELL GRAMMAR
...
Shell Function Definitions
...
[ function ] name () compound-command [redirection]

and do not see the version you show without the parens.

$ function good_dir() [[ -n $1 && -d $1 && -r $1  && -x $1 ]]
$ good_dir; echo $?
1
$ good_dir /tmp; echo $?
0

Ken




Re: Standard .bashrc needs a minor fix

2010-05-08 Thread Ken Irving
On Fri, May 07, 2010 at 03:03:57PM -0400, Mike Frysinger wrote:
> On Friday 07 May 2010 08:49:26 Greg Wooledge wrote:
> > On Thu, May 06, 2010 at 09:30:20AM -0500, Chuck Remes wrote:
> > > e.g.
> > > [ -z "$PS1" ] && return
> > 
> > That's certainly *not* how I'd write that check.  If the goal is to
> > protect a block of commands from running when the shell is invoked
> > without a terminal, then I'd prefer this:
> > 
> > if [ -t 1 ]; then
> > # All your terminal commands go here
> > stty kill ^u susp ^z intr ^c
> > ...
> > fi
> 
> the somewhat common test ive seen in different distros to detect interactive 
> shells is:
> if [[ $- != *i* ]] ; then
>   # shell is non-interactive
>   return
> fi
> -mike

bash(1) (v4.1) includes:

OPTIONS
...
-iIf the -i option is present, the shell is interactive.

and 

INVOCATION
...
PS1 is set and $- includes i if bash is interactive,  allowing
a shell script or a startup file to test this state.
 
The latter statement is ambiguous, possibly suggesting that PS1 should
also be checked to test for an interactive shell.  This is apparently
not the case, judging by this and other responses in this thread.  I'm 
not sure how to reword it, but if testing for i in $- is sufficient then
this might be clarified.
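The $- test from this thread, as it might appear near the top of a
.bashrc; run non-interactively (as here) it takes the second branch,
while under bash -i it would take the first:

```shell
case $- in
    *i*) echo "interactive" ;;
    *)   echo "non-interactive" ;;
esac
```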

Ken
-- 
ken.irv...@alaska.edu




Re: Bash manual - interactive shell definition

2010-03-12 Thread Ken Irving
On Fri, Mar 12, 2010 at 11:57:41AM +0200, Pierre Gaston wrote:
> On Fri, Mar 12, 2010 at 11:50 AM, Ken Irving  wrote:
> 
> > On Fri, Mar 12, 2010 at 09:16:05AM +, Marc Herbert wrote:
> > > >> Could this sentence:
> > > >>
> > > >> "An interactive shell is one started without non-option arguments,
> > > >> unless -sis specified, without specifying the
> > > >> -c option, and whose input and error output are both connected to
> > terminals
> > > >> (as determined by isatty(3)), or one started with the -i option. "
> > > >>
> > > >> be any more confusing?
> > > >
> > > > Is seems pretty clearly stated to me.
> > >
> > > Please enlighten us with the priority of English boolean operators.
> > >
> > > I have never seen a natural language sentence with so many boolean
> > operators.
> >
> > Well I can try.
> >
> >An interactive shell is one started without non-option arguments,
> >
> > If there are any arguments then they must be options...
> >
> >unless -s is specified,
> >
> > bash(1) says: "If the -s option is present ... then commands are read
> > from the standard input", which clearly is not interactive.
> >
> 
> If you run "bash -s foo bar" in a terminal it starts an interactive shell.

Maybe the definition isn't correct, then, if your example is at odds
with the first two statements.  The -s is accompanied by foo, and bar
is a non-option argument.  I would think that 'foo' would be executed as
a command, and the file bar would be run as a script; I don't see
how this would be interactive, though.

Ken




Re: Bash manual - interactive shell definition

2010-03-12 Thread Ken Irving
On Fri, Mar 12, 2010 at 12:50:26AM -0900, Ken Irving wrote:
> On Fri, Mar 12, 2010 at 09:16:05AM +, Marc Herbert wrote:
> > >> Could this sentence:
> > >>
> > >> "An interactive shell is one started without non-option arguments,
> > >> unless -sis specified, without specifying the
> > >> -c option, and whose input and error output are both connected to 
> > >> terminals
> > >> (as determined by isatty(3)), or one started with the -i option. "
> > >>
> > >> be any more confusing?
> > > 
> > > Is seems pretty clearly stated to me.
> > 
> > Please enlighten us with the priority of English boolean operators.
> > 
> > I have never seen a natural language sentence with so many boolean 
> > operators.
> 
> Well I can try.
> 
> An interactive shell is one started without non-option arguments,
> 
> If there are any arguments then they must be options...
> 
> unless -s is specified,
> 
> bash(1) says: "If the -s option is present ... then commands are read
> from the standard input", which clearly is not interactive. 
> 
> without specifying the -c option,
> 
> The -c option is accompanied by a string containing the commands to be run,
> so the shell is not interactive.
> 
> and whose input and error output are both connected to terminals (...),
> 
> Without which there'd be nothing for it to interact with.
> 
> or one started with the -i option.
 
I let the previous reply fly before it was ready...

The -i option would seem to override the other conditions, declaring the
shell to be interactive even if it wouldn't otherwise be.

I'm not saying the sentence is trivial to parse, but I don't see any
ambiguities in the definition.

Ken





Re: Bash manual - interactive shell definition

2010-03-12 Thread Ken Irving
On Fri, Mar 12, 2010 at 09:16:05AM +, Marc Herbert wrote:
> >> Could this sentence:
> >>
> >> "An interactive shell is one started without non-option arguments,
> >> unless -sis specified, without specifying the
> >> -c option, and whose input and error output are both connected to terminals
> >> (as determined by isatty(3)), or one started with the -i option. "
> >>
> >> be any more confusing?
> > 
> > Is seems pretty clearly stated to me.
> 
> Please enlighten us with the priority of English boolean operators.
> 
> I have never seen a natural language sentence with so many boolean operators.

Well I can try.

An interactive shell is one started without non-option arguments,

If there are any arguments then they must be options...

unless -s is specified,

bash(1) says: "If the -s option is present ... then commands are read
from the standard input", which clearly is not interactive. 

without specifying the -c option,

The -c option is accompanied by a string containing the commands to be run,
so the shell is not interactive.

and whose input and error output are both connected to terminals (...),

Without which there'd be nothing for it to interact with.

or one started with the -i option.

This option would seem to override the other 





Re: Bash manual - interactive shell definition

2010-03-12 Thread Ken Irving
On Thu, Mar 11, 2010 at 09:10:11AM -0500, Robert Cratchit wrote:
> On page
> 
> http://www.gnu.org/software/bash/manual/bashref.html#Bash-Startup-Files
> 
> Could this sentence:
> 
> "An interactive shell is one started without non-option arguments,
> unless -sis specified, without specifying the
> -c option, and whose input and error output are both connected to terminals
> (as determined by isatty(3)), or one started with the -i option. "
> 
> be any more confusing?

Is seems pretty clearly stated to me.

Ken





Re: Variable getopts lost

2010-02-23 Thread Ken Irving
On Tue, Feb 23, 2010 at 08:30:16PM +0100, Daniel Bunzendahl wrote:
> ...
> if [ !$LSEITE ]; then 
>  LSEITE=$(pdfinfo $pdf | grep Pages: | sed -e 's/Pages:[[:space:]]//g')
> echo "-l automatisch auf $LSEITE gesetzt"
> fi
> ...
> 
> In the last if-loop LSEITE will be set if LSEITE isn't set.
> This is for no parameters on command-line.
> But how I wrote: It ever works but now it lost the -l 104 ... the -f is 
> no 
> Problem...
> 
> My question wasn't fokused on my wrong script. I think there is something 
> wrong or limited by the System...
> Maybe you can give me a tip I should search for...
> 
> Thanks a lot
> Daniel :-)

Note that 'if [ !$LSEITE ]' becomes 'if [ ! ]' if LSEITE is not set, and
a single non-empty argument always tests true.  Maybe that's what you
want, but a better way is to use the -n or -z test operators and to
*quote the variable expansion*, e.g.,

if [ -z "$LSEITE" ]; then ...

Unquoted variables expand to nothing, and are evaluated as such.
Perhaps there are other such cases in your script?
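A demonstration of why the unquoted form can't work: with or without a
value, '!$LSEITE' collapses to one non-empty word, which test treats as
true:

```shell
unset LSEITE
[ !$LSEITE ] && echo "true even when unset"   # becomes [ ! ]
LSEITE=104
[ !$LSEITE ] && echo "true when set, too"     # becomes [ !104 ]
[ -z "$LSEITE" ] || echo "-z correctly sees a value"
```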

Ken





Re: $(pwd) != $(/bin/pwd)

2010-01-05 Thread Ken Irving
On Tue, Jan 05, 2010 at 08:23:39PM +0100, Andreas Schwab wrote:
> Greg Wooledge  writes:
> 
> > On Mon, Jan 04, 2010 at 01:25:50PM +, Stephane CHAZELAS wrote:
> >> >> da...@thinkpad ~/foo $ echo $PWD
> >> >> /home/darkk/foo
> >
> >> Well, if I read
> >> http://www.opengroup.org/onlinepubs/9699919799/utilities/pwd.html
> >> correctly, bash pwd should output /home/darkk/bar in that case
> >> as $PWD does *not* contain an absolute path to the current
> >> directory.
> >
> > An "absolute pathname" is one that begins with a / character.  As
> > opposed to a "relative pathname" which does not, and which is resolved
> > relative to your current working directory.
> >
> > $PWD is always an absolute pathname.
> 
> There are two conditions: 1. absolute pathname and 2. to the current
> directory.  The second one is violated.

Just an observation, but the directory can be deleted as well as renamed,
with similar results.  In this case the inode exists but has no name(s)
pointing to it.

Either case (rename or delete) can be handled in scripts, e.g., using -d
$PWD, if this sort of thing is anticipated.
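A sketch of the deleted-directory case: $PWD keeps the old name while the
inode it named is gone, and -d detects that:

```shell
tmp=$(mktemp -d)
cd "$tmp"
rmdir "$tmp"             # remove the working directory out from under us
[ -d "$PWD" ] || echo "current directory no longer exists: $PWD"
cd /                     # recover to a directory that does exist
```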

--
ken.irv...@alaska.edu




Re: command_not_found_handle not called if "command" includes a slash

2009-12-29 Thread Ken Irving
On Tue, Dec 29, 2009 at 12:33:25PM -0900, Ken Irving wrote:
> On Mon, Dec 28, 2009 at 03:24:33PM -0900, Ken Irving wrote:
> > On Sat, Dec 26, 2009 at 12:54:47PM -0900, Ken Irving wrote:
> > > Description:
> > > I'm not sure this is a bug, but I notice that the
> > > command_not_found_handle function is not called if the "command" has a
> > > slash in it.  I can't find anywhere in the bash source producing the
> > > "No such file ..."  error message, so I guess this is being caught
> > > somewhere else before bash gets the command line to process.
> > 
> > Fix:
> > 
> > This patch is not sufficient, as it leaves the error message, but it
> > does call the hook function in the problem cases:
> 
> This patch is better, calling the hook function before the error message
> is produced:

Ok, one last go at this, losing some local variables.  

$ diff -u execute_cmd.c{-original,}
--- execute_cmd.c-original  2009-12-28 14:48:46.0 -0900
+++ execute_cmd.c   2009-12-29 14:26:54.0 -0900
@@ -4660,6 +4660,7 @@
   int larray, i, fd;
   char sample[80];
   int sample_len;
+  SHELL_VAR *hookf;
 
   SETOSTYPE (0);   /* Some systems use for USG/POSIX semantics */
   execve (command, args, env);
@@ -4675,6 +4676,11 @@
internal_error (_("%s: is a directory"), command);
   else if (executable_file (command) == 0)
{
+ hookf = find_function (NOTFOUND_HOOK);
+ if (hookf != 0)
+ exit (execute_shell_function (hookf,
+   make_word_list (make_word (NOTFOUND_HOOK), 
+   strvec_to_word_list (args, 0, 0)));
  errno = i;
  file_error (command);
}

-- 
ken.irv...@alaska.edu




Re: command_not_found_handle not called if "command" includes a slash

2009-12-29 Thread Ken Irving
On Tue, Dec 29, 2009 at 10:40:04PM +0100, Jan Schampera wrote:
> Ken Irving schrieb:
> 
> >> This patch is not sufficient, as it leaves the error message, but it
> >> does call the hook function in the problem cases:
> 
> I'm just not sure if it makes sense. I mean, if the user requests the
> execution of a *specific file*, what should the hook function do if it
> fails?

That's up to that function to determine, since bash passes control over
to it.  It should be able to handle whatever it gets.  My use case is
to take things that look like 'object.method' -- which are not likely
to collide with normal executables -- and run them under a special handler.
That handler emits an error message and exit code if it can't make sense
of its argument.
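A hypothetical sketch of such an 'object.method' dispatcher (the names
stack.push and the dispatch format are made up for the demo); note the
handler only fires for slash-free commands, which is the limitation the
patches in this thread address:

```shell
command_not_found_handle() {
    case $1 in
        *.*) echo "dispatch: object=${1%%.*} method=${1#*.} args=${*:2}" ;;
        *)   echo "bash: $1: command not found" >&2; return 127 ;;
    esac
}
stack.push 42    # no slash, no such command: the handler runs
```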

Ken

-- 
ken.irv...@alaska.edu




Re: command_not_found_handle not called if "command" includes a slash

2009-12-29 Thread Ken Irving
On Mon, Dec 28, 2009 at 03:24:33PM -0900, Ken Irving wrote:
> On Sat, Dec 26, 2009 at 12:54:47PM -0900, Ken Irving wrote:
> > Description:
> > I'm not sure this is a bug, but I notice that the
> > command_not_found_handle function is not called if the "command" has a
> > slash in it.  I can't find anywhere in the bash source producing the
> > "No such file ..."  error message, so I guess this is being caught
> > somewhere else before bash gets the command line to process.
> 
> Fix:
> 
> This patch is not sufficient, as it leaves the error message, but it
> does call the hook function in the problem cases:

This patch is better, calling the hook function before the error message
is produced:

$ diff -u execute_cmd.c{-original,}
--- execute_cmd.c-original  2009-12-28 14:48:46.0 -0900
+++ execute_cmd.c   2009-12-29 12:20:05.0 -0900
@@ -4660,6 +4660,8 @@
   int larray, i, fd;
   char sample[80];
   int sample_len;
+  SHELL_VAR *hookf;
+  WORD_LIST *wl, *words;
 
   SETOSTYPE (0);   /* Some systems use for USG/POSIX semantics */
   execve (command, args, env);
@@ -4675,6 +4677,13 @@
internal_error (_("%s: is a directory"), command);
   else if (executable_file (command) == 0)
{
+ hookf = find_function (NOTFOUND_HOOK);
+ if (hookf != 0)
+   {
+ words = strvec_to_word_list (args, 0, 0);
+ wl = make_word_list (make_word (NOTFOUND_HOOK), words);
+ exit (execute_shell_function (hookf, wl));
+   }
  errno = i;
  file_error (command);
}

-- 
ken.irv...@alaska.edu




Re: command_not_found_handle not called if "command" includes a slash

2009-12-28 Thread Ken Irving
On Sat, Dec 26, 2009 at 12:54:47PM -0900, Ken Irving wrote:
> Description:
> I'm not sure this is a bug, but I notice that the
> command_not_found_handle function is not called if the "command" has a
> slash in it.  I can't find anywhere in the bash source producing the
> "No such file ..."  error message, so I guess this is being caught
> somewhere else before bash gets the command line to process.

Fix:

This patch is not sufficient, as it leaves the error message, but it
does call the hook function in the problem cases:

$ diff -u execute_cmd.c{-original,}
--- execute_cmd.c-original  2009-12-28 14:48:46.0 -0900
+++ execute_cmd.c   2009-12-28 15:13:22.0 -0900
@@ -4463,7 +4463,17 @@
 leave it there, in the same format that the user used to
 type it in. */
   args = strvec_from_word_list (words, 0, 0, (int *)NULL);
-  exit (shell_execve (command, args, export_env));
+  int execve_result;
+  execve_result = shell_execve (command, args, export_env);
+  if ( execve_result == 127 )
+   {
+ hookf = find_function (NOTFOUND_HOOK);
+ if (hookf == 0)
+ exit (execve_result);
+ wl = make_word_list (make_word (NOTFOUND_HOOK), words);
+ exit (execute_shell_function (hookf, wl));
+   }
+  exit (execve_result);
 }
   else
 {

Ken

-- 
ken.irv...@alaska.edu




command_not_found_handle not called if "command" includes a slash

2009-12-26 Thread Ken Irving
Configuration Information [Automatically generated, do not change]:
Machine: i486
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i486' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i486-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
-DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include -I../bash/lib   -g -O2 -Wall
uname output: Linux hayes 2.6.18-4-686 #1 SMP Wed May 9 23:03:12 UTC 2007 i686 
GNU/Linux
Machine Type: i486-pc-linux-gnu

Bash Version: 4.0
Patch Level: 33
Release Status: release

Description:
I'm not sure this is a bug, but I notice that the
command_not_found_handle function is not called if the "command" has a
slash in it.  I can't find anywhere in the bash source producing the
"No such file ..."  error message, so I guess this is being caught
somewhere else before bash gets the command line to process.

The behavior is not new; a second example is included below from v3.2,
showing the same error message when the bad command looks like a path.

I'd like to dig into this to see whether this case could also be hooked
to provide a handler, but I have no idea where to look.  Is there any
hope for this?

Repeat-By:
$ export PS1='\$?=$?\n$ '
$?=0
$ 
$?=0
$ kj
-bash: kj: command not found
$?=127
$ ./kjdf
-bash: ./kjdf: No such file or directory
$?=127
$ command_not_found_handle() { cmd="$1"; exec echo "$cmd" "$@"; }
$?=0
$ kj a b c
kj kj a b c
$?=0
$ ./kj a b c
-bash: ./kj: No such file or directory
$?=127
$ bash --version
GNU bash, version 4.0.33(1)-release (i486-pc-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$?=0
$

An example on v3.2:

$ export PS1='\$?=$?\n$ '
$?=0
$ bash --version
GNU bash, version 3.2.39(1)-release (i486-pc-linux-gnu)
Copyright (C) 2007 Free Software Foundation, Inc.
$?=0
$ kj
bash: kj: command not found
$?=127
$ ./kj
bash: ./kj: No such file or directory
$?=127

Thanks!





Re: have bg, fg, but lack stop

2009-12-21 Thread Ken Irving
On Sat, Dec 19, 2009 at 05:48:44PM -0900, Ken Irving wrote:
> kill %1 works for me.  I've puzzled over this before, and I think part of the
> trouble may be that 'jobspec' is not defined in bash(1) (v 3.29).

Ok, I misunderstood the issue, stop vs kill.

Ken

-- 
ken.irv...@alaska.edu




Re: have bg, fg, but lack stop

2009-12-21 Thread Ken Irving
On Sun, Dec 20, 2009 at 10:30:27AM +0800, jida...@jidanni.org wrote:
> Notice how I need two steps to stop this running job:
> 
> $ jobs
> [1]+  Running firefox &
> $ fg
> firefox
> ^Z
> [1]+  Stopped firefox
> 
> As there is no
> $ stop %1
> like command.

kill %1 works for me.  I've puzzled over this before, and I think part of the
trouble may be that 'jobspec' is not defined in bash(1) (v 3.29).

Ken

> 
> OK, I suppose I can use
> 
> $ kill -s SIGSTOP %1
> $
> [1]+  Stopped firefox
> 
> OK, never mind. Market demand too low to add...

-- 
ken.irv...@alaska.edu




Re: Command substitution reduce spaces even in strings

2009-12-08 Thread Ken Irving
On Tue, Dec 08, 2009 at 02:01:23PM +0100, ma...@fiz15.jupiter.vein.hu wrote:
> Configuration Information [Automatically generated, do not change]:
> Machine: i686
> OS: linux-gnu
> Compiler: i686-pc-linux-gnu-gcc
> Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i686' 
> -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i686-pc-linux-gnu' 
> -DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
> -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  
> -DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
>  -DSTANDARD_UTILS_PATH='/bin:/usr/bin:/sbin:/usr/sbin' 
> -DSYS_BASHRC='/etc/bash/bashrc' -DSYS_BASH_LOGOUT='/etc/bash/bash_logout' 
> -DNON_INTERACTIVE_LOGIN_SHELLS -DSSH_SOURCE_BASHRC -O2 -march=athlon-mp -pipe
> uname output: Linux fiz15.jupiter.vein.hu 2.6.30-gentoo-r8 #1 SMP Thu Nov 12 
> 16:15:30 CET 2009 i686 AMD Athlon(tm) MP 2000+ AuthenticAMD GNU/Linux
> Machine Type: i686-pc-linux-gnu
> 
> Bash Version: 4.0
> Patch Level: 35
> Release Status: release
> 
> Description:
>   [Detailed description of the problem, suggestion, or complaint.]
> 
> The command substitution reduce spaces even in strings.
> This is not healthy on copying files of which names containing double or 
> more spaces generated by command substitution.
> Short example:
> 
> $ echo $(echo "'alfa  beta'")
> 'alfa beta'
> 
> Instead of 'alfa  beta' with double space.
> 
> Repeat-By:
>   [Describe the sequence of events that causes the problem
>   to occur.]
> 
> 
> $ echo $(echo "'alfa  beta'")
> 'alfa beta'
> 
> Instead of 'alfa  beta' with double space.

$ echo "$(echo "'alfa  beta'")"
'alfa  beta'
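
The substitution itself preserves the spaces; it's the word splitting of the unquoted expansion that collapses them:

```shell
s=$(echo "alfa  beta")   # assignment context: no splitting, both spaces kept
echo "${#s}"             # -> 10
echo $s                  # unquoted: split into two words, rejoined by one space
echo "$s"                # quoted: printed intact
```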

-- 
Ken Irving, ken.irv...@alaska.edu




Re: qwerty

2009-11-11 Thread Ken Irving
On Thu, Nov 12, 2009 at 01:47:52AM +0100, Antonio Macchi wrote:
> $ printf "%s\n" ok -
> ok
> -
>
> why that score in the newline?

Your format string has one conversion specifier but there are two
arguments, so the format is reused until the arguments are exhausted.

> 
>
> $ printf "%d %s\n" 1 ok -
> 1 ok
> -bash: printf: -: invalid number
> 0
>
> why getting error here, and not in the previous?
> why "invalid number" ?
> what is that zero?

Again, you have more arguments than conversions, so printf makes another
pass, and on the second pass it tries to format - as a number.  The zero
is printed because printf substitutes 0 for an argument it can't convert
as a number (and the leftover %s expands to nothing).

> 
>
> $ printf "%2s\n" qwerty
> qwerty
>
> strings larger than fixed-width are entire written?

It's not fixed width but minimum width.
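
A quick illustration, with brackets added so the padding is visible:

```shell
printf '[%2s]\n' a qwerty
# [ a]       short argument padded to the two-character minimum
# [qwerty]   long argument printed in full, not truncated
```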


> I'm using BASH_VERSION 3.2.39, so please forgive me if this issue are  
> already fixed


There's nothing to fix.  It might help if you provide some markers
in your test patterns so you can see where each argument begins and
ends, e.g.,

$ printf "(%d) {%s}\n" 1 ok -
(1) {ok}
-bash: printf: -: invalid number
(0) {}

Ken

-- 
Ken Irving, fn...@uaf.edu, 907-474-6152
Water and Environmental Research Center
Institute of Northern Engineering
University of Alaska, Fairbanks




Re: Bash script file naming problem?

2009-07-25 Thread Ken Irving
On Fri, Jul 24, 2009 at 07:36:31PM -0700, michael rice wrote:
> Is there a problem with naming a bash script file "script"? I'm using Fedora 
> 11.
> 
> ...
> [mich...@localhost ~]$ cat ./bin/script
> #!/bin/bash
> # Sample shell script
> echo "The date today is `date`"
> echo Your shell is $SHELL
> echo Your home directory is $HOME
> echo The processes running on your system are shown below:
> ps
> 
> [mich...@localhost ~]$ script
> Script started, file is typescript

The "which" command is useful to see how a command will be resolved.  If 
you type:

which script

you'll likely see something other than what you're expecting.
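
Resolution just follows PATH order, which bash's `type`/`command -v` builtins also report; a sketch of the mechanism using a temporary directory (names are illustrative):

```shell
dir=$(mktemp -d)
printf '#!/bin/sh\necho my script\n' > "$dir/script"
chmod +x "$dir/script"

PATH=$dir:$PATH       # put our directory first in PATH...
command -v script     # ...and the bare name now resolves to ours
"$dir/script"         # an explicit path sidesteps PATH entirely
```

With the directory order reversed, the system's script(1) wins instead, which is what the original poster was seeing.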

Ken

-- 
Ken Irving, fn...@uaf.edu
Water and Environmental Research Center
Institute of Northern Engineering
University of Alaska, Fairbanks




Re: Feature Idea: Enhanced Tab Completion

2009-03-21 Thread Ken Irving
On Sat, Mar 21, 2009 at 01:01:51PM -0400, Cam Cope wrote:
> On Sat, Mar 21, 2009 at 11:48 AM, Chet Ramey  wrote:
> > Cam Cope wrote:
> > > Combine tab completion with history: when you put ! at the beginning of a
> > > command and use tab completion, it displays history results
> >
> > What do you mean by `history results'?
> >
> I'm sorry if the feature has already been implemented, I haven't heard of
> any way to implement it. This is what I was thinking of:
> Right now, if you run history, it will list out all the recently used
> commands, and then you could run !360 to run that history result. Often I'm
> looking for a specific command that I don't want to retype the options for.
> Instead of having to do history | grep commandname and then !###, just start
> typing !commandname and hit tab to see history entries that start with it.

This sounds a lot like what you get with the reverse-search-history 
command, bound to control-r (C-r), a great feature indeed.  

Ken

-- 
Ken Irving, fn...@uaf.edu, 907-474-6152
Water and Environmental Research Center
Institute of Northern Engineering
University of Alaska, Fairbanks




Re: command not found magic

2009-01-12 Thread Ken Irving
On Thu, Dec 04, 2008 at 09:37:58AM +, Stephane Chazelas wrote:
> On Thu, Dec 04, 2008 at 10:16:20AM +0100, Roman Rakus wrote:
> > probably you heard about this topic. It is invoked by ubuntu guys. See 
> > https://launchpad.net/ubuntu/+spec/command-not-found-magic
> > I would like to know, what do you think about it. It needs a small change 
> > in bash.
> [...]
> 
> A note about it. If ever it makes it to bash, maybe a good idea
> would be to call the handler command_not_found_handler instead
> of command_not_found_handle as it seems to make more sense (at
> least to me) and that's what zsh has chosen (for the same
> reason). See http://www.zsh.org/mla/workers/2007/msg00592.html
> So that one could use the same rc file for bash and zsh.

I'd second that.  Here's a patch for bash-4.0-rc1:

--- config-top.h.orig   2009-01-12 13:41:45.0 -0900
+++ config-top.h2009-01-12 13:54:46.0 -0900
@@ -96,5 +96,5 @@
 
 /* This is used as the name of a shell function to call when a command
name is not found.  If you want to name it something other than the
-   default ("command_not_found_handle"), change it here. */
-/* #define NOTFOUND_HOOK "command_not_found_handle" */
+   default ("command_not_found_handler"), change it here. */
+/* #define NOTFOUND_HOOK "command_not_found_handler" */
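
With the rename, a single rc fragment could then serve both shells; a hypothetical sketch (the message text is illustrative):

```shell
# Shareable between ~/.bashrc and ~/.zshrc once both shells
# agree on the function name:
command_not_found_handler() {
  echo "$1: command not found" >&2
  return 127
}
```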

-- 
Ken Irving, fn...@uaf.edu, 907-474-6152
Water and Environmental Research Center
Institute of Northern Engineering
University of Alaska, Fairbanks




Re: command not found on remote server

2008-12-11 Thread Ken Irving
On Thu, Dec 11, 2008 at 03:35:33PM -0700, Bob Proulx wrote:
> Paul Jarc wrote:
> > Bob Proulx wrote:
> > > Also, using full paths is frowned upon.
> > 
> > You mean invoking /directory/some-command directly instead of
> > PATH=$PATH:/directory
> > some-command
> > ?
> ...
> ... I was actually commenting on a previous suggestion earlier in
> the thread that full paths be used.  I was compelled to object to that
> suggestion.
> 
> I have seen too many scripts that people write with full paths
> *thinking* that they are making the script stronger when in reality
> they are making it more fragile.  More fragile because they are then
> fixing the script into the rigid framework of a particular system.
> That is the case I am frowning upon.

I've often used hard-coded paths, and appreciate your argument and
reasoning.  Makes sense to me, and I'll try to change my ways.  Thanks!

Ken

-- 
Ken Irving




Re: how to know if a command was successful on remote server

2008-12-10 Thread Ken Irving
On Wed, Dec 10, 2008 at 08:48:57AM -0800, Dolphin06 wrote:
> 
> Hello i m sending command to remote server, in my script on my local machine.
> I would like to know how can i return a value if the command on the remote
> server failed.
> on my script on local machine :
> 
> #! /bin/bash
> 
> #how can i get a returned value from this ?
> ssh [EMAIL PROTECTED] remotescript param

From ssh(1):

     ssh exits with the exit status of the remote command or with 255 if an
     error occurred.
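
So the local script can just test `$?`, distinguishing a remote failure from an ssh failure. A sketch (host and script names are placeholders):

```shell
# run_remote wraps the ssh call and classifies its exit status:
# 255 means ssh itself failed; anything else is the remote command's status.
run_remote() {
    ssh "$@"
    status=$?
    if [ "$status" -eq 255 ]; then
        echo "ssh itself failed (connection, auth, ...)" >&2
    elif [ "$status" -ne 0 ]; then
        echo "remote command exited with status $status" >&2
    fi
    return "$status"
}

# run_remote user@host remotescript param
```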

Ken

-- 
Ken Irving




Re: command not found magic

2008-12-04 Thread Ken Irving
On Thu, Dec 04, 2008 at 08:45:48AM -0500, Chet Ramey wrote:
> Roman Rakus wrote:
> > Hi,
> > probably you heard about this topic. It is invoked by ubuntu guys. See
> > https://launchpad.net/ubuntu/+spec/command-not-found-magic
> > I would like to know, what do you think about it. It needs a small
> > change in bash.
> 
> This will be in bash-4.0, with some improvements.

Another wish-list item for that would be to pass the complete argument
list to the handler.  The current patch for this (in debian, same as
ubuntu) only provides the failed command name, no arguments.

Thanks,

Ken
-- 
Ken Irving, [EMAIL PROTECTED], 907-474-6152
Water and Environmental Research Center
Institute of Northern Engineering
University of Alaska, Fairbanks




Re: "C-z bg" without the lurch

2008-11-25 Thread Ken Irving
On Wed, Nov 26, 2008 at 09:51:13AM +0800, [EMAIL PROTECTED] wrote:
> There are many times one has not planned ahead, and forgets the &:
> $ emacs -nw important.txt #then after a half an hour of editing:
> ^Z
> [1]+  Stopped emacs -nw important.txt
> $ compact_disk_burner_GUI_application #forgot to add &
> 
> OK, we want to get back to emacs, but we dare not stop the
> compact_disk_burner lest we ruin the burn. No not even for the split
> second a "^Z bg" would take.
> http://groups.google.com/group/comp.unix.shell/browse_thread/thread/e69b7bf5eddd68ca
> Sure, "next time don't use -nw", "killall -1 emacs, your file will be
> in #important.txt#.
> Anyway, I wish there was a way to communicate a "disown" command or
> something to that shell. stty -a shows a lot of weird keys. Anyway, it
> would be neat if there was a key e.g., C-y, that would "have the
> effect of C-z bg, but without ever letting the job in question feel
> the brief sting of being stopped.
> 
> Anyway, how could it be that the mighty bash can't let me get back to
> my emacs without lurching my CD burning job, even for a split second?

It sounds like you're no longer talking to bash: the CD burner
application is the foreground job, so it gets your keystrokes, and bash
won't read input again until that job stops or exits.  If the
application itself binds some key for this, then maybe it can do what
you want?

-- 
Ken Irving




patch for bash 3.2 manpage parameter expansion operator summaries

2008-10-25 Thread Ken Irving
Configuration Information [Automatically generated, do not change]:
Machine: i486
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i486' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i486-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
-DHAVE_CONFIG_H   -I.  -I../bash -I../bash/include -I../bash/lib   -g -O2 -Wall
uname output: Linux hayes 2.6.18-4-686 #1 SMP Wed May 9 23:03:12 UTC 2007 i686 
GNU/Linux
Machine Type: i486-pc-linux-gnu

Bash Version: 3.2
Patch Level: 39
Release Status: release

Description:
While the first few parameter expansion operator entries include a
brief summary of what the entity does (in bold), the last several
do not.  One has to read and parse quite a bit of text to even
get a hint of what some of the operators do.  Entry D3 in the
FAQ, though, does show summaries for a few of those operators,
and I thought the bash(1) manual page would benefit by the
addition of similar summaries.

Repeat-By:
man bash
/^ *Parameter Expansion
...

Fix:
I'll try to include a patch at the bottom of this report, below,
generated by diff -u.

FYI, following are the entries with just the added text summaries
shown:

${!prefix*}
${!prefix@}
   Expand to names matching prefix.

${!name[@]}
${!name[*]}
   Expand to array indices/keys.

${#parameter}
   Parameter length.

${parameter#word}
${parameter##word}
   Remove smallest or largest prefix pattern.

${parameter%word}
${parameter%%word}
   Remove smallest or largest suffix pattern.

${parameter/pattern/string}
   Pattern substitution.
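
For instance, applied to a pathname, the summarized operators give:

```shell
path=/usr/local/bin/bash
echo "${#path}"          # parameter length -> 19
echo "${path##*/}"       # remove largest prefix pattern -> bash
echo "${path#*/}"        # remove smallest prefix pattern -> usr/local/bin/bash
echo "${path%/*}"        # remove smallest suffix pattern -> /usr/local/bin
echo "${path/bash/sh}"   # pattern substitution -> /usr/local/bin/sh
```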

Thanks,

Ken

$ cat ~/tmp/bash-manpage-fix/bash-3_2-manpage-hints.patch
--- bash.1  2008-10-24 09:49:30.0 -0800
+++ ki-bash.1   2008-10-24 10:40:05.0 -0800
@@ -2437,6 +2437,7 @@
 .TP
 ${\fB!\fP\fIprefix\fP\fB@\fP}
 .PD
+\fBExpand to names matching prefix\fP.
 Expands to the names of variables whose names begin with \fIprefix\fP,
 separated by the first character of the
 .SM
@@ -2448,6 +2449,7 @@
 .TP
 ${\fB!\fP\fIname\fP[\fI*\fP]}
 .PD
+\fBExpand to array indices/keys\fP.
 If \fIname\fP is an array variable, expands to the list of array indices
 (keys) assigned in \fIname\fP.
 If \fIname\fP is not an array, expands to 0 if \fIname\fP is set and null
@@ -2456,6 +2458,7 @@
 key expands to a separate word.
 .TP
 ${\fB#\fP\fIparameter\fP}
+\fBParameter length\fP.
 The length in characters of the value of \fIparameter\fP is substituted.
 If
 .I parameter
@@ -2477,6 +2480,7 @@
 .TP
 ${\fIparameter\fP\fB##\fP\fIword\fP}
 .PD
+\fBRemove smallest or largest prefix pattern\fP.
 The 
 .I word
 is expanded to produce a pattern just as in pathname
@@ -2509,6 +2513,7 @@
 .TP
 ${\fIparameter\fP\fB%%\fP\fIword\fP}
 .PD
+\fBRemove smallest or largest suffix pattern\fP.
 The \fIword\fP is expanded to produce a pattern just as in
 pathname expansion.
 If the pattern matches a trailing portion of the expanded value of
@@ -2535,6 +2540,7 @@
 array in turn, and the expansion is the resultant list.
 .TP
 ${\fIparameter\fP\fB/\fP\fIpattern\fP\fB/\fP\fIstring\fP}
+\fBPattern substitution\fP.
 The \fIpattern\fP is expanded to produce a pattern just as in
 pathname expansion.
 \fIParameter\fP is expanded and the longest match of \fIpattern\fP





Re: piping output is delayed

2008-09-28 Thread Ken Irving
On Sun, Sep 28, 2008 at 08:29:46AM +0100, martin schneebacher wrote:
> 
> i have the same setup on a laptop and a desktop ps. while it works on
> the laptop the redirecting to the textfile hast a too long delay on
> the desktop pc (same dist, same binaries).
>
> meanwhile i found a explanation at
> http://osdir.com/ml/linux.suse.programming-e/2005-08/msg00030.html i
> changed the main program (that one with the output) so that it writes
> direct into a textfile instead to stdout and this solves my problem.
> but i still don't know if it's possible and how i can modify the
> output buffer to stdout.

Try comparing terminal settings on each platform, e.g., with stty -a,
and look into any differences.
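
If it instead turns out to be stdio buffering, as in the explanation you linked, GNU coreutils' stdbuf can force line-buffered output through a pipe without modifying the program (a sketch; ./program and output.txt are placeholders):

```shell
# -oL switches the program's stdout from block buffering to line
# buffering, so each line reaches the pipe as soon as it is printed.
stdbuf -oL ./program | tee output.txt
```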

Ken

-- 
Ken Irving, [EMAIL PROTECTED], 907-474-6152
Water and Environmental Research Center
Institute of Northern Engineering
University of Alaska, Fairbanks




Re: [PATCH] command-not-found-handle

2008-04-09 Thread Ken Irving
On Sun, Apr 04, 2004 at 05:23:36PM +0200, Michael Vogt wrote:
> Dear Friends,

> I attached a small patch that adds a "command-not-found-handle" for
> bash 2.05b.0(1)-release. If a given command is not found and the
> function "command-not-found-handle" is definied, it is called with the
> faild comand as parameter. If it's not defined the behaviour is
> unchanged. 
> 
> Please tell me what you think about the patch and if there is a chance
> for it to get integrated into bash. See [1] for some rational why this
> might be usefull.
> 
> thanks,
>  Michael
> 
> [1] http://www.geocrawler.com/mail/msg.php3?msg_id=2795796&list=342

This patch (or very similar) was accepted into debian's bash some time
ago, though apparently not upstream.  I'm interested in extending this
patch to include the command arguments; currently only the command
string itself is passed to the "hook" function.

In the patch, shell variables f and v are declared, but v is not used;
my guess is that it may have been intended for the argument list, but
for some reason was dropped.  I haven't yet figured out how to make this
work, but will keep trying...

The referenced link [1] to some rationale hasn't aged well, and I'd be
interested in seeing any discussions of this feature.

My interest is to "extend" bash to support object oriented commands,
which I currently have to explicitly run through a handler; e.g., to use:

  $ object.do_something args...

instead of:

  $ tob object.do_something args...

where the tob script parses object and method, resolves the method to
some executable via path-like information associated with the object,
and then executes the equivalent of

  $ $class_path/do_something object args...

I think this should be relatively safe, since there are probably few
executables which include a dot in their name.
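
If the hook ever receives the arguments, the dispatch could be a hypothetical handler along these lines (tob and the dot convention as described above; all names are illustrative):

```shell
# Route "object.method args..." through the tob handler when the failed
# command name contains a dot; otherwise behave like the stock error.
command_not_found_handle() {
  case $1 in
    *.*) tob "$@" ;;
    *)   echo "bash: $1: command not found" >&2
         return 127 ;;
  esac
}
```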

Finally, I thought it should be possible to find the command line with
arguments under /proc/$pid, but no joy there either; if that was the 
case, the command_not_found_handle function could perhaps reconstruct 
the full command in order to pass to the tob handler.  I'm running debian
stable and unstable systems, e.g.,

  $ uname -a
  Linux hayes 2.6.18-4-686 #1 SMP Wed May 9 23:03:12 UTC 2007 i686 GNU/Linux

Thanks for any thoughts, ridicule, etc., on this.

Ken

> --- execute_cmd.c.orig2004-04-04 15:40:18.0 +0200
> +++ execute_cmd.c 2004-04-04 16:41:33.0 +0200
> @@ -3270,8 +3270,23 @@
>  
>if (command == 0)
>   {
> -   internal_error ("%s: command not found", pathname);
> -   exit (EX_NOTFOUND);   /* Posix.2 says the exit status is 127 */
> +SHELL_VAR *f, *v;  
> +WORD_LIST *cmdlist;
> +WORD_DESC *w;
> +int fval;
> +f = find_function ("command-not-found-handle");
> +if (f == 0) {
> +   internal_error ("%s: command not found", pathname);
> +   exit (EX_NOTFOUND);   /* Posix.2 says the exit status is 127 */
> +}
> +w = make_word("command-not-found-handle");
> +cmdlist = make_word_list(w, (WORD_LIST*)NULL);
> +
> +w = make_word(pathname);
> +cmdlist->next = make_word_list(w, (WORD_LIST*)NULL);
> +
> +fval = execute_shell_function (f, cmdlist);  
> +exit(fval);
>   }
>  
>/* Execve expects the command name to be in args[0].  So we

-- 
Ken Irving, [EMAIL PROTECTED], http://sourceforge.net/projects/thinobject/
Hi=~/lib/thinob/Try/Hi; mkdir -p $Hi; ln -s /usr/local/lib/thinob $Hi/^
hi=$(tob Try/Hi.tob hi); echo -e '#!/bin/sh\nshift\necho Hello $*!'>$hi
chmod +x $hi; mkdir say; ln -s $Hi say/^; tob say.hi world