Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread Phi Debian
Oops, I forgot the reply-to-all in my reply to @David Hedlund.


I was on the same wavelength as you, @g...@wooledge.org.

==
On Fri, Jul 12, 2024 at 12:14 AM David Hedlund  wrote:

>
>
> When a directory is deleted while the user is inside it, the terminal
> should automatically return to the parent directory.
>
>
Jeez, this looks pretty dangerous, doesn't it? Imagine a script (or a fast
typist in an interactive shell) that is doing some bookkeeping inside a
directory (i.e. removing some files in $PWD), and $PWD is unlinked by
another random process; your script (or fast typist, who types and reads
afterwards) would then continue its bookkeeping in the parent (of the login
shell dir?). It makes no sense.

So when $PWD is unlinked while working in it, the only thing a shell can
do is emit a message like

$ rm -rf $PWD
$ >xyz
bash: xyz: No such file or directory

Now you may argue this is not explicit enough; nothing in there tells you
that $PWD was unexpectedly unlinked. You may implement something more
explicit, like this, for an interactive shell; a script should simply bail
out on error.

$ PS1='$(ls $PWD>/dev/null 2>&1||echo $PWD was unlinked)\n$ '

$ pwd
/home/phi/tmp/yo

$ ls
asd  qwe  rty

$ rm -rf $PWD
/home/phi/tmp/yo was unlinked
$ ls
/home/phi/tmp/yo was unlinked
$ cd ..

$ mkdir -p yo/asd ; >yo/qwe >yo/rty ; cd yo

$ pwd
/home/phi/tmp/yo
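
The same warning can also be produced without a command substitution in PS1;
a minimal sketch using PROMPT_COMMAND instead (interactive shells only, the
check_pwd name is hypothetical):

check_pwd() {
    # Warn at each prompt if the directory we are sitting in was unlinked.
    [ -e "$PWD" ] || printf '%s was unlinked\n' "$PWD" >&2
}
PROMPT_COMMAND=check_pwd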

=

Bottom line, this is not a bug, and there is no enhancement request here.

Or 'maybe', on a bash error like
bash: xyz: No such file or directory
the full path could be expanded:

bash: /home/phi/tmp/yo/xyz: No such file or directory

But even about that, I am not sure.


Re: proposed BASH_SOURCE_PATH

2024-07-08 Thread Phi Debian
@Greg, @Martin +1; I lost sight of the feature and am crossing fingers that
the current semantics/behavior are not destroyed. @Oğuz -1; I'd like to avoid
fixing scripts that run today just because bash was updated, or I would
advocate that distros keep a frozen bash as macOS did.


Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-21 Thread Phi Debian
On Tue, May 21, 2024 at 1:17 PM Koichi Murase 
wrote:


> There are already shell-function implementations at
> /examples/functions/autoload* in the Bash source. They reference FPATH
> to load functions, though one needs to call `autoload' for each
> function in advance (by e.g. `autoload "$fpath_element"/*' ).
>

> However, I personally do not think the FPATH mechanism is useful
> because a file can only contain one function per file. Significantly
> non-trivial functions are usually implemented by a set of helper
> functions or sub-functions. Also, in libraries, we usually have a set
> of functions that are closely related to one another and share the
> implementations. I don't think it is practical to split those
> functions into dozens or hundreds of files. It would also be slow to
> read many different files, which requires access to random positions
> on the disk.
>

Jeez, I forgot about this examples/functions/autoload.

First of all, thanx for pointing it out; it demonstrates that things can be
implemented as bash code (not a C hack inside bash), at least in an
experimental phase.

Second, there is no such limitation of a one-source-one-function mapping:
since the file is sourced, any functions defined in it become available.
This opens two paths.

1. A package setup function is autoloaded, bringing in all the other
functions, which are then callable.

Say file foo defines foo() and bar(); then autoloading foo brings in both
foo() and bar(). If foo is the setup function for file foo, then the doc sez
'autoload foo' to bring bar() along.

Another path is to have a symlink for each 'exported' function pointing to
the file foo, so that ln -s foo bar would make bar autoloadable. Note I used
'exported' here meaning the foo file may define a lot of functions; only the
exposed API ones need a symlink.
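
A minimal eager-loading sketch of that idea (not the shipped
examples/functions/autoload code, which defers the actual source until the
first call; the FPATH layout is hypothetical):

autoload() {
    # Source the first file named $1 found along FPATH; everything the file
    # defines (foo(), bar(), ...) becomes available, not just one function.
    local name=$1 dir
    for dir in ${FPATH//:/ }; do
        [ -r "$dir/$name" ] && { . "$dir/$name"; return 0; }
    done
    printf 'autoload: %s: not found in FPATH\n' "$name" >&2
    return 1
}

# One file, several exported names via symlinks:
#   ln -s foo bar    # 'autoload bar' sources the same file and brings foo() along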

The real divergence between this autoload and the ksh93 one is that ksh93
will try an autoload of foo via FPATH in the 'command not found' situation,
something bash doesn't handle in the user context (it handles it in a
subshell, for unknown reasons). So basically in a ksh93 script, besides
setting FPATH, you simply call functions directly; on command-not-found the
shell then tries FPATH to find your function and, if found, proceeds with the
load: define the function, run the setup code, and run the function with its
parameters...
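
For illustration, a hedged sketch of why bash's command_not_found_handle
cannot reproduce that ksh93 behavior (FPATH layout hypothetical):

command_not_found_handle() {
    # bash runs this handler in a separate execution environment, so any
    # function sourced here is lost as soon as the handler returns.
    local dir
    for dir in ${FPATH//:/ }; do
        if [ -r "$dir/$1" ]; then
            . "$dir/$1"    # defines the function, but only in this subshell
            "$@"           # runs it once...
            return $?      # ...then the definition is discarded
        fi
    done
    printf 'bash: %s: command not found\n' "$1" >&2
    return 127
}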

This is neat, but the fact that we don't have it in bash is no big deal; we
can live without it. Again, no need to break things that work.


Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-21 Thread Phi Debian
On Tue, May 21, 2024 at 1:15 PM Greg Wooledge  wrote:

> On Tue, May 21, 2024 at 10:12:55AM +, Matheus Afonso Martins Moreira
> wrote:
> > > the schizophrenic nature of the feature
> >
> > First the feature was "irritating" ... Now it's "schizophrenic" ?
> > I must be mentally ill for trying to contribute this?
> >
> > Yeah, I'm done.
>
> I don't think "schizophrenic" was used as an insult.  Rather, it looks
> like an attempt to describe the fact that everyone in the thread has a
> different concept and/or experience regarding how 'source' and '.'
> should work.
>
>
Well, what I really meant was that there are two directions, one with
read/parse and the other with read/parse/eval, and not being able to make up
one's mind, like Buridan's ass:
https://en.wikipedia.org/wiki/Buridan%27s_ass

An 'import' that does read/parse (along with a PATH variable) makes sense and
doesn't interfere with source.

A 'source' that does read/parse/eval is there with its semantics and should
not go away.

An 'import' that looks like source but tweaks its flags and semantics is
borderline; besides, it can be implemented in shell alone with no bash
modification. There is no perf consideration here, we are not on the perf
path, unless someone comes up with a source in an inner loop :-)

No offense, just pointing out that these two paths don't sound mixable. An
implementation of your $BASH_SOURCE_PATH can be done entirely in shell in one
of your libs (say your bootstrap lib); once that one is loaded, it can load
all the others.
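
For illustration, a hedged sketch of that bootstrap pattern; the path and the
'import' name are hypothetical, nothing bash provides by itself:

#!/bin/bash
# Top-level script: one explicit line loads the bootstrap lib, which
# defines 'import' (a plain shell function) used for everything else.
. /usr/local/lib/mypkg/bootstrap.sh
import colors.sh
import logging.sh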


Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-20 Thread Phi Debian
On Mon, May 20, 2024 at 7:54 PM Greg Wooledge  wrote:

> On Mon, May 20, 2024 at 07:43:10PM +0200, Andreas Kähäri wrote:
> > On Mon, May 20, 2024 at 05:31:05PM +, Matheus Afonso Martins Moreira
> wrote:
> >
> >   PATH=${BASH_SEARCH_PATH-$PATH} . file
> >
> > without the need to add any options to . or to source.  But maybe that
> > too pedestrian?
>
> Are we going in circles yet?  This would clobber the value of PATH for
> the duration of sourcing "file", which would potentially cause commands
> in "file" to break.
>
> hobbit:~$ cat bar
> echo hi | cat
> hobbit:~$ PATH=. source bar
> bash: cat: command not found
>

Yes we are probably circling because of the schizophrenic nature of the
feature.

At the beginning, I think (though I'm not sure I remember well), the whole
idea was to get bash 'libraries', described as bringing functions written by
others into a runnable (-x) script. This was perceived (I guess) as an
'include' but, since it was functions, maybe as a 'link', or both combined,
and as such required special treatment à la the compiler's -I and -L, using
an INC_PATH / LIB_PATH kind of thing.

Some thought 'source' was a good candidate for this, except that source is a
read/parse/eval feature, while the intention was more a read/parse for the
benefit of the 'importer'.

So a construct like

hobbit:~$ PATH=. source bar

is perfectly legit if we are honest and admit that bar is an import only and
executes no code, simply bringing in functions.

So PATH=$BASH_SOURCE_DIR . greatlib

is the way to go, still assuming libs are functions/vars only and no code
execution.

As soon as the 'library' concept assumes that eval is also possible, along
with read/parse, then kaboom, we need PATH back, hence the
PATH=$PATH:/path/to/lib . form.

Real-life packages actually require the eval, because it is the way to
configure a 'library' depending on the user context.

So back to square one: what we need is source as it is, and it should not be
broken.

Apparently konsolebox wrote a package manager and survived source 'as is',
yet he still advocates for an 'enhancement'. I have a package manager too,
survived source 'as is', and don't require any enhancement.

'Maybe' bash could investigate the ksh93/zsh $FPATH autoload, but I don't
know whether that would be good enough for the initial purpose.

I don't know whether that FPATH thing was investigated before and, if so, why
it was rejected. If it is good enough and could be adopted, it would bring
some convergence.


Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-19 Thread Phi Debian
On Sun, May 19, 2024 at 10:58 AM Matheus Afonso Martins Moreira <
math...@matheusmoreira.com> wrote:

>
> This is not a problem that's introduced by this patch.
> People can already do that today. Anyone could write
> `alias source='rm -rf --no-preserve-root /*'` right now,
> nothing stops them. So I don't see why this would be
> a reason against inclusion of this feature.
>

I was not talking about breakage of a script; anybody knows you can stick an
rm -rf anywhere in your script (BTW, protections/containers/... mitigate the
damage). I was talking about two incompatible package managers, where one can
break the other: yours can break mine, mine can't break yours. I was asking
for a way to secure the old (current) way for mine.

So far what I see is typeset -r BASH_SOURCE_PATH='' in my init path, but I am
not even sure it will be enough; besides, I don't control my users, who may
miss the init update but can still mix'n'match the package managers.

Here we are not talking about adding one more feature to bash; we are talking
about one feature preying (intentionally or not) on another one, so it is
normal that some heads-up occurs.


> I understand your concern though. Perhaps we can find
> ways to work around it or fully address it.
>
Not too sure about that.


>
> > Another things that bugs me, why not implementing your invention in a
> > dynamically loaded builtin, advertise it on github, and let the one who
> > really needs this great thing be their choice download it, and use it.
>
> I don't really want to maintain such a thing. I'm struggling to find time
> to work on my personal projects as it is. We are lucky to have someone
> who maintains bash and who is considering this feature for addition.
>

Strange argument: you want to hand the hot potato of maintenance to someone
else.

First you claim the feature is tiny and almost bug-free in its inception;
second, that it is vastly needed.

If those assertions are true, then your maintenance workload is almost nil
for a feature you can always provide on top of bash with dynload.

If there is a vast need for it and, unfortunately, a bug shows up, then in
this vast population there will be a good soul providing a pull request for
it; your load would be minimal.

That's the magic of open software :-)

So I think a dynloadable builtin of yours would be a perfect use case for
dynload, and a good way to challenge your invention, i.e. measure the vast
population, the bug rate, etc. I think it is a good 'preview' path before
eventually moving from dynamic load to static load one day. Dynloadable
builtins exist; use them.



> I don't want to fork bash. I want to be able to download the latest version
> with my package manager and get access to this.


It is perfectly doable: clone, configure, build, apply your patch, and
provide it to yourself (or your users); not a big deal of effort.
You can have that today, all of it hidden in... bingo, a library :-)



> If I had to fork bash to
> get this, I would simply give up and continue working on my own
> programming language instead.
>

I think this is a great valuable idea!

PS: On the quotations side, I remember one: a piece of software is bug-free
when you trash it :-)


Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-19 Thread Phi Debian
On Sun, May 19, 2024 at 1:17 AM Matheus Afonso Martins Moreira <
math...@matheusmoreira.com> wrote:

>
> Those should continue to work without any changes.
> They should even be compatible with each other:
> there should be no problems with sourcing -i a script
> which then sources another script normally or uses
> one of the module management solutions.
>
>   -- Matheus
>

Not true; the hack that breaks the whole thing is:

an old source.sh relying on the current {source, ., $PATH} behavior.
The source.sh is alive (maintained).

Someone comes along and says: look, I've got this neat library to colorize
stuff; it depends on a couple of other libraries (meaning they do source -i
inside). The library (the top one, maybe) decides that source -i is the
general way to go and declares a despotic alias source='source -i'; this
could be a general setup of this package manager.

Back to good ol' source.sh: now any 'source' it does is biased.

I agree that the same is true for other shell options/variables;
'libraries/packages/imports' can break the whole thing too (IFS is a good
one), but so far we have managed :-)

So in the end, the poor one who relies on the actual {source, ., $PATH} would
have to warn in its coding conventions: never ever source a .sh that relies
on BASH_SOURCE_PATH. I know a grep would suffice to enforce it, but there is
no reason to force that.

Another thing that bugs me: why not implement your invention as a dynamically
loaded builtin, advertise it on GitHub, and let the ones who really need this
great thing choose to download it and use it?

I personally have my own bash and my own ksh for our internal needs; we
maintain them, and we have our own set of internally crafted builtins. I
would not dare push them to the shell maintainers, fearing I would break
everyone who depends on bash/ksh/dash/whatever for their OS startup, init
scripts, configure scripts, etc.


Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-18 Thread Phi Debian
On Sat, May 18, 2024 at 3:23 PM Matheus Afonso Martins Moreira <
math...@matheusmoreira.com> wrote:

>
> > That would cause shell scripts to see it and exhibit a change in
> behavior.
>
> Only if the -i option is not implemented.
> If it is, there will be zero change in behavior
> whether BASH_SOURCE_PATH is set or not.
> Only if the script explicitly passes -i to source
> will the behavior change.
>

I am crossing fingers that the consensus, if any of these takes off, is to
have the -i implemented (not omitted), along with a protection against alias
source='source -i'.

I really depend on the actual 'source' behavior and can't update my packages
(distributed for a long time). I don't plan to use BASH_SOURCE_PATH, so if
anyone (distro, sysadmin, or ultimately the user) fiddles with
BASH_SOURCE_PATH in the environment (*rc files), changing the good ol'
'source' behavior, it would be a serious regression for me.

So to me, BASH_SOURCE_PATH enablement must be guarded by a never-yet-used
option to 'source' or '.' (here -i, apparently) and by an enforcement that no
alias source='source -i' could ever be possible.

All this is because someone writing a 'main' script (#!/bin/bash), sourcing a
package of mine through my package management system and expecting the
current source behavior, could later add the loading of a 'library' from a
friend (terminal color jazz) that in turn starts to mess around with alias
source='source -i' and BASH_SOURCE_PATH, and that could start to break my way
of finding my packages.

An alternative to this would be: BASH_SOURCE_PATH="" does nothing to
'source', and the 'libraries' designer is allowed to do

typeset -r BASH_SOURCE_PATH=""

in the early steps of the main script, so nobody will ever be able to do
anything clever with BASH_SOURCE_PATH or add imports from an incompatible
package system. But even this idea is a problem for me, because it would
require a change on my side that is not really doable.

Well, in short: don't break the old source semantics.


Re: [PATCH 0/4] Add import builtin

2024-05-07 Thread Phi Debian
Ok, thanx for the clarification.

I agree about the git{lab,hub} position; that's why I added {,whatever},
meaning something really free.

Ok, now all is clear, and I will not be surprised anymore if some other
'feature' proposal shows up in the future.


Re: [PATCH 0/4] Add import builtin

2024-05-06 Thread Phi Debian
On Mon, May 6, 2024 at 7:51 PM Kerin Millar  wrote:

>
>
> I'll put it in a little more detail, though no less plainly. I find the
> terminology of "libraries" and "modules" to be specious in the context of a
> language that has no support for namespaces, awkward scoping rules, a
> problematic implementation of name references, and so on. These
> foundational defects are hardly going to be addressed by a marginally more
> flexible source builtin. Indeed, it is unclear that they can be - or ever
> will be - addressed. Presently, bash is what it is: a messy, slow, useful
> implementation of the Shell Command Language with an increasing number of
> accoutrements, some of which are fine and others of which are less so (and
> virtually impossible to get rid of). As curmudgeonly as it may be to gripe
> over variable and option names, this is why the import of library, as a
> word, does not rest at all well in these quarters. That aside, I do not
> find the premise of the patch series to be a convincing one but would have
> little else to say about its prospective inclusion, provided that the
> behaviour of the posix mode were to be left unchanged in all respects.
>
> --
> Kerin Millar
>
>
Thanx @Kerin, I had an intuitive reluctance toward the patch series but could
not formalize it that way; that is exactly it (especially the nameref, for me
:-)).

That brings up some questions about the bash dev workflow. I personally only
monitor bash-bug (not the other bash-* lists), especially to be aware of new
incoming patches.

Generally, the few patches that show up here fix a specific standing bug or,
on occasion, are the declaration of a bug along with a patch to fix it; they
generally get a reply of 'thanx for the report and patch!'

I rarely see patches of the "Hey guys, I got this great idea, what do you
think?" kind, so I wonder whether git{lab,hub,whatever} would be more
appropriate, or maybe I should say more convenient, for this kind of request;
a public git project (clone) generally has all the infrastructure for
discussions, enhancements and fixes.

The bash code owner could then pull the great idea when the demand for it
starts roaring?

Just asking; I really have no idea what the bash dev team's way of receiving
enhancement requests is.


Re: Re: [PATCH 0/4] Add import builtin

2024-05-06 Thread Phi Debian
On Mon, May 6, 2024 at 7:28 AM Matheus Afonso Martins Moreira <
math...@matheusmoreira.com> wrote:

> Yet the feature has been described as "irritating"!
> I really don't understand the cause for this
> and it's making me feel really unwelcome.
>

I think it is not personal: you proposed something, and others told you that
what you propose is overkill, given how simple it is to do the same with no
change to bash and a one-liner of code in your top-level project file.

Your proposal cannot be just a builtin: you want a new 'model' with
'packages/modules' (not saying library on purpose, way too much for me).
Starting from there, your model needs at least an init line to say you want
bash operating in your 'packages/modules' model. From there, your new model
can be initialized with a one-liner, either a dynloadable builtin (a .so file
you provide) or an init file (.sh, no +x) that you source. At that point,
whatever the technique, you have your 'import'. Is it that big of a deal to
ask your developers to comply with your model and init your
'packages/modules' paradigm, bringing in 'import', with one line in their top
script (here 'script' means a +x executable bash source file)?

TBH, I have had a package system for both bash and ksh for decades; they all
start with an init line at the top level, they never required any shell hack,
and they do far more than what is intended here: repos, versioning,
dependencies, multi-arch (runtime context), etc.

I think it is very good to be enthusiastic about hacking on free software; a
good starting point is to fix bugs, even tiny ones, even simple docco.

Another option is to create a fork of a project (here bash), implement your
hack, provide it (GitHub style), and then watch the stats; if your new bash
skyrockets, maybe your hack will be backported. You could advertise your new
bash in stko; to everyone asking for a package manager in bash you could
reply with your new invention, boosting the stats.

All that to say, I am not too sure that distros (and installers of all sorts)
will be willing to get a fatter bash with a higher risk of bugs, hacks,
security issues, you name it; they would surely ask for this feature to be
optional, thus always requiring an init line to enable it.

So I guess it will take a long time before we see that in the shell, but I
may be wrong.


Re: Re: [PATCH 0/9] Add library mode to source builtin

2024-05-05 Thread Phi Debian
On Mon, May 6, 2024 at 5:43 AM Matheus Afonso Martins Moreira <
math...@matheusmoreira.com> wrote:

> > I think every single use of the term "library" in this whole endeavor
> > is misleading and misguided.
>
> I don't think so. A library is just a logical collection of code and data
> that you can use to implement software. This is a mechanism meant
> to load those libraries separately from executables.
>

Not in my parlance. Libraries are 'compiled collections of object modules';
they can be static or dynamic, and dynamic implies a runtime load. This also
holds for interpreters with a JIT compiler that can produce libraries. Bash
does have libraries by means of loadable .so files, and .so files can
legitimately be called libraries. Those libraries have their own PATH thing
(i.e. LD_LIBRARY_PATH).

A 'script' is a bash source file with the +x bit set (executable), and as
such has no extension, e.g. 'foo'. A non-executable bash file, preferably
with a .sh extension, is a bash source file.

The source builtin just loads a 'file', albeit a .sh file, not a library.


Re: [PATCH 0/4] Add import builtin

2024-05-05 Thread Phi Debian
On Sun, May 5, 2024 at 9:47 PM Greg Wooledge  wrote:

> On Sun, May 05, 2024 at 03:32:04PM -0400, Lawrence Velázquez wrote:
>
>
> The idea to add a BASH_SOURCE_PATH variable that gets searched before, or
> instead of, PATH when using the source builtin -- that sounds good.  I
> don't really understand the concepts behind the rest of the discussion.
>
>
Well, even this is unclear: 'BASH_SOURCE_PATH gets searched before PATH', or
'BASH_SOURCE_PATH gets searched instead of PATH', or even 'BASH_SOURCE_PATH
gets searched after PATH'.

Each has valid reasons to exist...

Anyway, if this is the only desire, then something simple will do.

In the module system init .sh (used by .rc or another .sh):
IPATH=... ; alias xyz_import='PATH="$IPATH" source' ; alias xyz_type='PATH="$IPATH" type'

The setup above is for 'instead of PATH'; use 'PATH="$IPATH:$PATH" source'
for 'before PATH', and so on.

Then you can use xyz_import foo.sh or xyz_type foo.sh

BTW, I didn't read all the patches, but I wonder what the semantics of the
builtin 'type' become in your implementation; I guess that on top of the
currently defined concepts (file, function, alias, builtin) you now add
'library' (better name needed), with all the docco and jazz about it.

The risk/benefit (or even ROI) seems unfavorable, considering the
implementation, the QA tests and the docco.

For something that can be done with a one-liner in an .rc file, I am really
not convinced.


Re: Re: Re: [PATCH 0/4] Add import builtin

2024-05-05 Thread Phi Debian
On Sat, May 4, 2024 at 4:44 AM Matheus Afonso Martins Moreira <
math...@matheusmoreira.com> wrote:

>
> By "library system" I just mean the exact mechanism through which
> bash will load a "library". By "library", I mean ordinary scripts
> whose purpose is to collect related functions and variables
> for reuse by other scripts or interactive bash users.
>

Maybe that's why the term 'library' is not very suitable: there are other
ways to load functions and variables into a shell context, i.e. shared
libraries/objects as dynamic builtins, so the 'import' would have to cover
all or none. In that case the import would find the shared libs in the
restricted path IMPORT_PATH along with the scripts, and then a 'library'
could be either a text script or a binary .so/a.out.

Sounds more like a Pandora's box :-)



> By introducing the "module/library system" I want to do the following:
>
>   1. Add a builtin primitive that can be used to load libraries
>   2. Establish a convention for where bash will look for libraries
>   3. Separate the libraries from the commands/executables
>

If it's all that simple, maybe this three-liner can be stuck in your .rc
file, or hidden in yet another '.'-findable script:

shopt -s expand_aliases # Some may dislike this
xyz_import='for xyz in ${XYZ_IMPORT_PATH//:/\/$1 }/$1;do xyz=$(realpath -q $xyz);[ -r "$xyz" ]&&. $xyz&&break;done'
alias xyz_import='source /dev/stdin <<< "$xyz_import"'

1. It creates a new primitive (well, a command in shell-parser vocabulary):
xyz_import
2. It establishes a convention for where bash will look for libraries:
XYZ_IMPORT_PATH
3. It separates the libraries from the commands/executables by looking only
in the $XYZ_IMPORT_PATH directory set
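
A hedged usage sketch of the above (directory and file names hypothetical;
assumes the loop stops at the first readable match):

$ mkdir -p "$HOME/bashlib"
$ echo 'greet() { echo "hello from greet"; }' > "$HOME/bashlib/greet.sh"
$ XYZ_IMPORT_PATH="$HOME/bashlib:/usr/local/share/bashlib"
$ xyz_import greet.sh
$ greet
hello from greet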

Murphy's law

Note that I namespaced all this with xyz/XYZ; change it to your taste, but
beware of other 'import's.

While testing this I discovered (and had forgotten) that I had installed the
'imagemagick' package (Linux Debian), which in turn sets up an alternatives
path for 'import':

$ ll /usr/bin/import
lrwxrwxrwx 1 root root 24 Feb 26 07:43 /usr/bin/import ->
/etc/alternatives/import

$ ll /etc/alternatives/import
lrwxrwxrwx 1 root root 23 Feb 26 07:43 /etc/alternatives/import ->
/usr/bin/import-im6.q16

So I guess it is wise to stay away from the name 'import'.


Re: [PATCH 0/4] Add import builtin

2024-05-03 Thread Phi Debian
On Fri, May 3, 2024 at 5:26 AM Koichi Murase  wrote:

>
>
> By the name "import", I expect also something like an include guard,
> i.e., it loads the same file only once at most. I have a
> shell-function implementation,`ble-import', in my framework. It
> doesn't support namespace, but it supports the include guard. It also
> accepts a setting for the paths to search. It also has a feature like
> `with-eval-after-load' of elisp (or the "onload" event).
>
> --
> Koichi
>
>
I don't think we need yet another builtin named 'import', for all the reasons
mentioned in the mail thread. Setting BASH_IMPORT_PATH in an .rc file is no
less confusing than simply setting PATH in the same .rc file. If one
mixes'n'matches directories set up in both PATH and BASH_IMPORT_PATH, and
mixes'n'matches the -x mode setting of .sh files, then all sorts of
documentation must be written to tell explicitly which case takes precedence,
etc. I by far prefer to handle $PATH myself in my .rc and to place my 'shell
packages' myself in my own hierarchies.

Note that a package (an import) is generally non-executable, as some
mentioned; yet being able to 'run' a package can be interesting in some
cases, for instance a package run can display its embedded documentation,
useful during development (yet debatable).

My package management has the same feature as @Koichi's, i.e. a C-like
#pragma once.
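
For illustration, a hedged sketch of that '#pragma once' equivalent at the
top of a sourceable file (names hypothetical):

# mylib.sh -- meant to be sourced, not executed
[ -n "${_MYLIB_SH_LOADED:-}" ] && return 0   # second import: do nothing
_MYLIB_SH_LOADED=1

mylib_hello() { printf 'hello from mylib\n'; }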

I added a 'force' option to my 'import'; this is for interactive 'import', to
bring in more functions (globals, aliases), yet during bring-up one may fix a
package and force a reload.

Some packages, on the other hand, are meant to be 'imported' by other
packages, and as such I have a versioning mechanism, so 'import' can tell
about new versions available from the git repos, or even auto-update from git
(when suitable), etc.; and indeed it is possible for the import to tell which
minimal version is needed for the 'import'.

So doing an 'import' feature, maybe on par with python/perl/name-it, requires
more than just a new ENV var and the hack to use it.

Cheers


Re: Docco

2024-03-27 Thread Phi Debian
Interestingly, the ksh docco says that 'Conditional Expressions' apply to [[
only :-) and then says that -a is obsolete.

'test expression' later says:

  the -a and -o binary operators can be used, but they are fraught
  with pitfalls due to grammatical ambiguities and therefore
  deprecated in favor of invoking separate test commands

Ok let's forget about this...


Re: Docco

2024-03-27 Thread Phi Debian
On Wed, Mar 27, 2024 at 10:28 AM Andreas Kähäri 
wrote:

> On Wed, Mar 27, 2024 at 10:00:06AM +0100, Phi Debian wrote:
> > $ man bash
>
> Would it not be untrue to say that "-a" is specific to "[[", as it is
> clearly not the case?  The fact that it is easy to confuse the two is
> a different matter, but the documentation is correct for the current
> implementation (which mimics the ksh shell with regards to the unary
> "-a" operator).
>
> --
> Andreas (Kusalananda) Kähäri
> Uppsala, Sweden
>
> .

Ok, maybe my wording is not correct, but it still requires a careful reading
pass to get it right: first read all about [[ at the top of the man page (at
5%), then get the 'CONDITIONAL EXPRESSIONS' distinction between [[ and [ (at
40%), and finally get to 'test expr' (at 92%) to discover the whole thing
about -a vs -e (same for other options). A little heads-up when reading
quickly, directly at 'CONDITIONAL EXPRESSIONS', would not hurt and would not
jeopardise the docco semantics, I guess?


Docco

2024-03-27 Thread Phi Debian
$ man bash
...
CONDITIONAL EXPRESSIONS
...

   -a file
  True if file exists.
   -e file
  True if file exists.
...

'Maybe' it would be nice for newbies to specify which options are [ specific
vs [[ specific, for instance:

   -a file
  True if file exists ([[ only, for [ see test builtin)

This is to avoid things like:

$ [   -a /tmp ] && echo ok || echo nok
ok
$ [ ! -a /tmp ] && echo ok || echo nok
ok

I know it is obvious, unless this is intended to force a complete
multi-pass man read...
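
For completeness, a hedged sketch of the unambiguous spellings the test
documentation recommends:

$ [ ! -e /tmp ] && echo ok || echo nok    # -e has no binary meaning
nok
$ ! [ -a /tmp ] && echo ok || echo nok    # negate the whole test command
nok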


Re: unreliable value of LINENO variable

2023-12-04 Thread Phi Debian
Confirmed on Ubuntu 22.04.3 LTS
--

$ cat ~/yo.sh
echo $BASH_VERSION
echo 0 $LINENO
if ((1)); then
   ( : ) | : ; echo 1 $LINENO
fi
echo 2 $LINENO

$ for s in bash ./bash ksh zsh
> do printf "\n$s\n" ; $s ~/yo.sh
> done

bash
5.1.16(1)-release
0 2
1 4
2 5

./bash
5.2.21(5)-release
0 2
1 4
2 5

ksh

0 2
1 4
2 6

zsh

0 2
1 4
2 6



Re: BUG: Colorize background of whitespace

2023-10-26 Thread Phi Debian
On Wed, Oct 25, 2023 at 5:01 PM Greg Wooledge  wrote:

>
> Ahh.  That wasn't clear to me.  Thanks.
>
>
Ouch, I got caught the same way. This can be reduced to:

$ clear
$ echo "\e[36;44;4m\nsome colored\ttext with\ttabs\e[m\n"
$  # Recall and run prev command
Repeat the latter until the top lines scroll out.

I added ;4 (underline) in the first \e sequence to show that emitting a tab
is just a cursor movement (not a character painting), as any other
cursor-positioning escape sequence would be.

Doing so, we see that for a tab neither the color nor the underline is
painted on the top lines; this is normal.

When the scroll-out occurs, the underline is still never painted, as
expected, yet the background color of the 'tab' is painted.

This is because there is a bogus \n right after the SGR sequence
\e[36;44;4m; this latter \n says we have an open SGR attribute, and \n will
preserve it on scroll-out, BUT colors are not completely SGR, as stated here:

https://invisible-island.net/xterm/ctlseqs/ctlseqs.html (search for
"ANSI X3.64-1979")


When we remove the bogus \n, all is normal:

$ clear
$ echo "\e[36;44;4m\some colored\ttext with\ttabs\e[m\n"

many times.


Generally speaking, it is not good to leave an SGR sequence open before a \n;
when emitting SGR before the \n, it is wise to emit the closing sequence
\e[m before the \n.
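
A minimal illustration of the safe pattern:

printf '\e[36;44msome colored\ttext\e[m\n'   # SGR reset emitted before the newline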


Re: BUG: Colorize background of whitespace

2023-10-25 Thread Phi Debian
On Wed, Oct 25, 2023 at 11:09 AM Holger Klene  wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -g -O2 -flto=auto -ffat-lto-objects -flto=auto
> -ffat-lto-objects -fstack-protector-strong -Wformat -Werror=format-security
> -Wall
> uname output: Linux BX-NB-015 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri
> Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
> Bash Version: 5.1
> Patch Level: 16
> Release Status: release
>
> Description:
> The initial bash background is hardcoded to some default (e.g. black) and
> cannot be colorized by printing "transparent" tabs/newlines with
> ANSI-ESC-codes.
> Only after a vertical scrollbar appears, the whitespace beyond the window
> hight will get the proper background color.
>
> Repeat-By:
> run the following command line:
> clear; seq 50; printf '\e[36;44m\nsome colored\ttext with\ttabs\e[m\n'
> Play with the parameter to seq, to keep the line within the first screen
> or move it offscreen.
>
> Reproduced in:
> - in Konsole on Kubuntu 23.04
> - in the git bash for windows mintty 3.6.1
> - in WSL cmd window on Windows 11
>
>
I guess this is the way terminal emulators work, and I guess as well that
they are doing what real terminals used to do, though I have no such terminal
at hand to prove it :-)

Worth considering:

$ clear; seq 50; printf '\e[36;44m\nsome colored\ttext with\ttabs\e[m\n' | expand


Re: error message lacks useful debugging information

2023-10-05 Thread Phi Debian
On Thu, Oct 5, 2023 at 12:57 PM Greg Wooledge  wrote:

> On Thu, Oct 05, 2023 at 07:04:26AM +0200, Phi Debian wrote:
> > Since we are on the error path (not the perf path) may be the shell could
> > go the extra miles and try some more diag, has it does for shebang, on
> > ENOENT, the shell could try to open the a.out, if OK try some other
> > euristics, [...]
>
> Just for the record, trying to open the program for reading is only
> possible for scripts.  It's not too uncommon for a compiled program
> to have +x permission for the user, but *not* +r permission.  Such a
> program can be executed, but not opened for reading.
>
>
Correct; this is why I proposed the latter case: when the shell cannot find
out more to say, just do nothing and simply suggest some known cases of
trouble, yet if the shell can provide more insight, running file(1) and/or
ldd(1) could be useful. This is the counterpart, for an interpreter, of what
compilers do: long ago, diagnostics from various compilers were very limited,
and nowadays they try to be as accurate as possible.


Re: error message lacks useful debugging information

2023-10-04 Thread Phi Debian
Since we are on the error path (not the perf path), maybe the shell could go
the extra mile and try some more diagnostics, as it does for the shebang. On
ENOENT, the shell could try to open the a.out and, if that works, try some
other heuristics, at least the trivial one, i.e. the multilib case, which
seems the most disorienting (the wrong arch, e.g. arm on intel, is already
correctly handled); then 'maybe' try an ldd (if ldd exists) to mention
possibly missing libs, all of this leading to yet another 'error message' to
be entered in the NLS...
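
For illustration, a hedged sketch of the manual version of those heuristics
(binary name taken from the thread):

$ file ./Candle    # reports the architecture and the dynamic loader it expects
$ ldd ./Candle     # lists the shared libraries, flagging any "not found"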


Or simply reword the current implementation's message; instead of
bash: ./Candle: cannot execute: required file not found
say
bash: ./Candle: cannot execute: Possible arch mismatch or missing shared libs

This assumes no QA test already matches the "required file not found"
wording; if one does, it would require yet another compat flag... With this
more explicit error message the OP would at least have some clue about what
to look for.


Re: bug#65659: RFC: changing printf(1) behavior on %b

2023-09-01 Thread Phi Debian
On Fri, Sep 1, 2023 at 8:10 PM Stephane Chazelas 
wrote:

> 2023-09-01 07:54:02 -0500, Eric Blake via austin-group-l at The Open Group:
>
>
> FWIW, a "printf %b" github shell code search returns ~ 29k
> entries
> (
> https://github.com/search?q=printf+%25b+language%3AShell=code=Shell
> )
>
>
Ha, super, at least some numbers :-). I didn't know we could make this kind
of query... thanx for that.


Re: RFC: changing printf(1) behavior on %b

2023-08-31 Thread Phi Debian
Well, after reading yet another thread regarding libc printf(), I have to
admit that even %B is crossed out (yet it was already chosen by ksh93).

The other thread also mentions that libc printf() documents %# as undefined
for conversions other than a, A, e, E, f, F, g, and G, yet the same thread
also notes that a/A came late to the dance (citing C99), meaning that what is
undefined today becomes defined tomorrow, so %#b is no safer.

My guess is that printf(1) is now doomed to follow its own route and keep its
old format exceptions, and then maybe implement something like c_printf, i.e.
like printf but with the format string following libc semantics, or maybe a
-C option to printf(1)...

Well, in any case %b cannot change semantics in bash scripts, since it has
been there for so long; even if it departs from python, perl and libc, it is
unfortunate but that's the way it is. Nobody wants a semantic change and
then, on the next router update, to see the whole internet falling apart :-)


Re: RFC: changing printf(1) behavior on %b

2023-08-31 Thread Phi Debian
On Thu, Aug 31, 2023 at 9:11 PM Chet Ramey  wrote:

>
> I doubt I'd ever remove %b, even in posix mode -- it's already been there
> for 25 years.
>
>
> Neither one is a very good choice, but `#' is the better one. It at least
> has a passing resemblence to the desired functionality.
>
> Why not standardize another character, like %B? I suppose I'll have to look
> at the etherpad for the discussion. I think that came up on the mailing
> list, but I can't remember the details.
>
>
Glad I read this thread before replying to the other one dealing with the
same issue.

I once worked on an issue in ksh93 regarding a printf discrepancy vs libc
printf, and was told that "ksh is not C". I think we have to admit that the
shells' printf departed from libc long ago, and now, if a feature appears in
libc and collides with printf(1), then we have to pick yet another %
exception character. In the bash docco I see %b, %q and %(datefmt), so for a
new feature we should pick something that we think libc has little chance of
targeting.

My vote is for POSIX printf %B mapping to libc printf %b, with the idea that
libc has little chance of using %B to mean UPPERCASE BINARY :-), as %x/%X do.

And yet one more line in the docco explaining this divergence.
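
For reference, a hedged reminder of what the shell-level %b does today, and
what must keep working:

$ printf '%b\n' 'col1\tcol2'    # %b expands the backslash escapes
col1    col2
$ printf '%s\n' 'col1\tcol2'    # %s leaves them alone
col1\tcol2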


Re: formatting man pages in email (was: Assignment to RO variable)

2023-08-16 Thread Phi Debian
I find it useful and will keep it :-) Thanx tons.

  readonly [-aAf] [-p] [name[=word] ...]
>   The given names are marked readonly; the values of these names
>   may not be changed by subsequent assignment.  If the -f option
>   is supplied, the functions corresponding to the names are so
>   marked.  The -a option restricts the variables to indexed ar‐
>   rays; the -A option restricts the variables to associative ar‐
>   rays.
>



On Wed, Aug 16, 2023 at 8:06 AM G. Branden Robinson <
g.branden.robin...@gmail.com> wrote:

> At 2023-08-15T23:24:31-0500, Dennis Williamson wrote:
> > From man bash:
> >
> > readonly [-aAf] [-p] [name[=word] ...]
> >   The given names are marked readonly; the values of these
> > names may not be changed by subsequent assignment.  If the -f option is
> > supplied, the functions
> >   corresponding to the names are so marked.  The -a option
>
> That man page quotation came out somewhat awkwardly.
>
> I often find myself quoting man pages in email, so I have a shell
> function for this scenario.  I call it "mailman", fully aware that
> that's also the name of mailing list manager.  (Even if I ran it, I
> wouldn't ever do so at an interactive shell prompt, because it's a
> daemon.)
>
> mailman () {
> local cmd= opts=
> case "$1" in
> (-*)
> opts="$opts $1"
> shift
> ;;
> esac
>
> set -- $(man -w "$@")
> cmd=$(zcat --force "$@" | \
> grog -Tutf8 -b -ww -P -cbou -rU0 -rLL=72n -rHY=0 -dAD=l)
> zcat --force "$@" | $cmd | less
> }
>
> This relies upon man-db man(1)'s `-w` option to locate the requested
> pages (and it does the right thing if you specify file names, not just
> page topics).
>
> It also uses grog(1), much improved in groff 1.23.0 (released 5 July),
> to figure out which preprocessor(s) and macro package the document
> needs.
>
> I'll walk through those groff options.
>
> * `-Tutf8` formats for a UTF-8 terminal.
> * `-P -cbou` passes options to grotty(1) to turn off all ISO
>   6429/ECMA-48 escape sequences, _and_ all overstriking sequences; their
>   formatting effects won't come through in plain text email anyway.
> * `-rHY=0` turns off hyphenation.
> * `-rLL=72n` sets the line length to 72 ens (character cells), which
>   helps prevent ugly line wrapping.
>
> Two options are new groff 1.23 features.
>
> * `-rU0` turns off hyperlink support, so that any URIs in the man page
>   will be formatted as text.  This is a new groff 1.23.0 feature.
> * `-dAD=l` turns off adjustment (the spreading of output lines).
>
> Two more options are ones I use, but maybe only maintainers of man pages
> want them.
>
> * `-b` produces backtraces for any formatter warnings or errors.
> * `-ww` turns on all formatter warnings.
>
> I hope people find this useful.
>


Re: problem anomalies , possibly aliases related

2023-07-21 Thread Phi Debian
On Thu, Jul 20, 2023 at 3:28 PM Greg Wooledge  wrote:

>
> The idea that this would "work" is quite surprising to me.  The basic
> idea of a function is that it does stuff and then returns you to the
> point where you were when the function was called.
>
>
Really ?


> In other languages, would you expect that you might call a function,
> and have that function reach upward through the call stack and manipulate
> your control flow?


To name a few :-)
In C, longjmp() lands you anywhere on the call path.
Any try/catch language will do it too

In the C family there are even functions that never return (see the noreturn
function attribute).

FORTRAN alternate return is fun too.

Not counting assembly where you can land anywhere with a ret *reg

Basically, it is a hacker's day-to-day job to return anywhere unexpected :-)


Re: [PATCH] sleep builtin signal handling

2023-06-30 Thread Phi Debian
Well

┌─none:/home/phi
└─PW$ type sleep
sleep is hashed (/usr/bin/sleep)

So sleep is not a builtin here.


Re: [PATCH] sleep builtin signal handling

2023-06-30 Thread Phi Debian
Strange, on BASH_VERSION='5.1.16(1)-release' sleep doesn't get interrupted by
SIGWINCH:

┌─none:/home/phi
└─PW$ trap 'echo $LINES.$COLUMNS' SIGWINCH
┌─none:/home/phi # Resizing window here
└─PW$ 79.80
└─PW$ 78.80
└─PW$ 77.80
└─PW$ 76.80
└─PW$ 75.80
^C
┌─none:/home/phi
└─PW$ sleep  # Resizing window here, sleep is not interrupted
^C
┌─none:/home/phi
└─PW$


Re: `wait -n` returns 127 when it shouldn't

2023-05-17 Thread Phi Debian
On Wed, May 17, 2023 at 12:21 PM Oğuz İsmail Uysal <
oguzismailuy...@gmail.com> wrote:

>
> This boils down to the following
>
>  true &
>  false &
>  wait -n
>
> There is no guarantee that `wait -n' will report the status of `true',
> the shell may acquire the status of `false' first. It's not a bug
>

Ok for the randomness of the result, yet $? should be 0 or 1, never 127, as
the OP asked? Did I miss something?


Re: bash core dumps doing glob pattern on long string

2022-10-11 Thread Phi Debian
On Tue, Oct 11, 2022 at 9:02 AM Martin D Kealey 
wrote:

> Broadly I meant translating into a "preferably" Deterministic
> (stackless/non-backtracking) Finite State Automaton.
>
> However I note that it's not always possible to make a Deterministic FSA
> when you have repeatable groups which themselves don't have fixed lengths
> (like a+(a|abbba|aabb*aba)b); either the extglob compiler would need to
> start over and compile to a Non Deterministic (stacking) FSA, or just give
> up and go back to the recursive approach.
>
> Personally I would favour the addition of «shopt -s dfa_extglob» that
> would block the fall-back, causing extglobs that would need a stack to be
> treated as magic match-never tokens.
>
> I say "extglob", but this would also speed up silly ordinary globs like
> [a-z]*[a-z]*[a-z]*[a-z]*[a-z]*[a-z]*[a-z]*[a-z]*[a-z]*[a-z]*[a-z]*[a-z]*[a-z]
>
> -Martin
>
>
Ok, I see; it's true that when you see a glob pattern like +(a), one would
think an optimiser could make it backtrack less.

«shopt -s dfa_extglob» could be an option; Chet may say it is hard to explain
when to use it... but we can go down this path too, which is less of a hack
regarding stack provision but more of a hack regarding the extglob compiler
:) Any implementation will please me :)


Re: bash core dumps doing glob pattern on long string

2022-10-11 Thread Phi Debian
iner ==> infer sorry about typo's


Re: bash core dumps doing glob pattern on long string

2022-10-11 Thread Phi Debian
Hum, maybe I was over-optimistic regarding alloca(); it seems it core dumps
as well :-), i.e. it never returns NULL.

I was mentioning alloca() because I used to do my own stack-size check using
getrlimit() to get the max stack size and measuring the distance between the
stack base and the current recursion level to infer how much stack remains. I
didn't want to talk about that, but it seems it is the last resort...

Well, not a simple case to handle.


Re: bash core dumps doing glob pattern on long string

2022-10-10 Thread Phi Debian
On Tue, Oct 11, 2022 at 7:23 AM Martin D Kealey 
wrote:

>
> 4) compile globs into state machines, the same way that regexes get
> compiled, so that they can be matched without needing any recursion.
>
> -Martin
>

Do you mean the NFA could trade recursion for malloc()'d backtracking-point
records? That would trade stack size for heap size. Or maybe you mean
converting the NFA into a DFA, which I could not handle myself :)

That's why at first I thought limiting the input string length was maybe an
option, but Chet demonstrated it was not.

Then returning an error, instead of core dumping, when the recursion goes too
deep in the NFA walk seems to be the thing to do, and then I thought a
stack-availability check at a given recursion level could be done. This is
pretty easy to do and almost costless regarding perf, especially because bash
(and other shells) have access to alloca(); well, we could accept that an
os/arch without alloca() could still core dump.

A stack check with alloca() resembles something along these lines:

int stack_too_small(size_t s)
{ return alloca(s)==NULL;
}

This non-inlinable function allocates the stack provision, returns true if
that is not possible and false if it is, and gives the provision back to the
stack on return; the caller is then assured there is enough space on the
stack for its recursion frame.

and the shell code would look like:

... recursive_func(...)
{ ...
  if (stack_too_small(recursion_frame_size))
  { /* handle nicely: longjmp, return an error, ... */ }
}

What Chet calls the right size to define is possibly something like the
recursion frame size, that is, the stack needed by the current
recursion-level instance (its locals) plus all the frame sizes the
sub-functions need until the next recurrence; i.e. if the call path looks
like

a() --> b() --> c() --> a() ...

the recursion frame size is sum(framesize(a) + framesize(b) + framesize(c)).

Some empiric constant could also be used; for instance, on Linux the default
max stack size is pretty much a known constant, so deciding on a big constant
like 32K could be simple and good enough.


Re: bash core dumps doing glob pattern on long string

2022-10-10 Thread Phi Debian
On Mon, Oct 10, 2022 at 9:08 PM Chet Ramey  wrote:

> That's not the point. The point is that shell pattern matching is used in
> contexts other than filename globbing, and the shell shouldn't broadly
> impose a filename-only restriction on those uses.
>
> Chet
>

Ok, got it. I was confused because the occurrences of the term GLOB in the
bash doc seem to appear only in the "Pathname Expansion" paragraph, which
points to a sub-paragraph called "Pattern Matching", so it looked to me as if
"Pattern Matching" applied only to "Pathname Expansion".

Inside "Pattern Matching" we see ref to things like
glob-complete-word (glob-* :-) ) all apply to
"pattern for pathname expansion"

extglob     If set, the extended pattern matching features described
            above under Pathname Expansion are enabled.
nocaseglob  If set, bash matches filename...

So at that point I thought that globbing was somehow related to the maximum
file path of the OS.

On the contrary, I see no reference saying that the 'pattern matching' for
pathname expansion can also be applied, by extension, to arbitrary strings,
but now I understand that it is.

Still, let me know the strategy or limit you choose, and I will do the same
for ksh93.

Cheers,


Re: bash core dumps doing glob pattern on long string

2022-10-10 Thread Phi Debian
Ok, I agree that PATH_MAX should not be considered, because it is not defined
everywhere (across OSes).

If you decide on a limit, let me know; I would use the same for ksh93 :)
For the time being we core dump :)

Cheers.


Re: bash core dumps doing glob pattern on long string

2022-10-09 Thread Phi Debian
@Oğuz, a simple look at the core dump will suffice to convince you that the
stack has overflowed. As Koichi stated, both ksh and bash implement the
'simple' recursive approach.

My question was only about the strategy that bash would follow, so ksh93 can
converge with bash; i.e. we can both keep core dumping, and if we decide not
to, then implement a common behavior.


bash core dumps doing glob pattern on long string

2022-10-09 Thread Phi Debian
I was looking at a bug in ksh93, "core dumps doing glob pattern on long
string", and it happens that bash suffers from the same:

$ [[ $(printf '%010d' 0) == +(0) ]]

I see 3 way of fixing this

1) [[ string == pattern ]] is for glob patterns, so string could be limited
to PATH_MAX; an upfront length check on string could then avoid calling the
recursive glob-pattern functions and prevent the core dump.

2) Since some may have abused glob patterns with strings bigger than PATH_MAX
but small enough not to core dump, imposing a PATH_MAX limit may break some
wrong scripts. So instead we could have a fixed recursion depth limit, as we
already have for shell function calls (see the FUNCNEST illustration after
this list); this hopefully would allow wrongdoing scripts with abusive string
lengths to continue to run, yet avoid the core dump when the limit is
reached, i.e. break the call path.

3) Implement a stack-depth check in the recursion: when getting close to the
end of the stack, break the call chain (like the 'too deep' error for
recursive shell functions).
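
For reference, a hedged illustration of the existing shell-function depth
limit mentioned in option 2 (bash's FUNCNEST variable; error text
approximate):

$ FUNCNEST=64
$ f() { f; }
$ f
bash: f: maximum function nesting level exceeded (64)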

A last possibility is 'do nothing', since most of the people/scripts that
work don't care. Yet the core-dump limit is not the same between bash and
ksh93, which makes porting hazardous. If the bash team plans to fix it, I'd
like to know which way, so we could make ksh93 behave the same: for 1) and 2)
it would be the exact same limit; for 3) it would depend on stack usage, so
it would not be the exact same string length that breaks, but it would break
instead of core dumping.

Cheers,


Re: simple prob?

2021-07-02 Thread Phi Debian
Ha, ok, I got it; I was focused on our interactive session. Surely a script
running on behalf of a UID would be ill-advised to interpret (in the sense of
shell evaluation) an arbitrary shell expression.

On Fri, Jul 2, 2021 at 6:06 PM Greg Wooledge  wrote:

> On Fri, Jul 02, 2021 at 05:45:23PM +0200, Phi Debian wrote:
> > Regarding the code injection I am not sure I got it.
> >
> > If you are sitting at a prompt, why would you trick
> >
> > unicorn:~$ njobs_ref 'x[0$(date>&2)]'
> >
> > when you could simply type
> > unicorn:~$ date
> >
> > I assume protected script/source (the ones you can't write into), are
> wise
> > enough not to run command based on user input, in short I guess no
> > protected script are doing thing like read in; eval $in :) that is the
> > simplest code injection :) and then would never let you have a chance to
> > enter 'x[0$(date>&2)]' at any time.
>
> For functions that you've written exclusively for personal use, it's
> not an immediate concern.  It's more of a thing that you want to be
> aware of for the future.
>
> Where it becomes important is when you're writing scripts for other
> people to use, or which run as different user accounts, or with
> different privileges.
>
> The classic example of this is a script that's run by a web server in a
> CGI environment, which accepts query parameters from the end user.  If
> one of those query parameters is used in an unsafe way, it can execute
> undesired commands on the web server.
>
> Of course, there are *many* other places that shell scripts are used,
> such as booting an operating system, starting various services, and
> so on.  In some of these cases, there is no external input being read,
> or the external inputs are "trusted" files owned and edited only by
> the system admin (root).  But in other cases, untrusted input may be
> read.
>
> So, there's merit in adopting a proactive strategy to shell script
> security.  Maintaining a slightly paranoid mindset can help you spot
> potential security holes and possibly avoid disasters.
>
>


Re: simple prob?

2021-07-02 Thread Phi Debian
Ha, yes, I lost sight of readarray -t < <(jobs); I think it is a good
improvement to the challenge :)

Regarding the local t, you're right; I failed to state that any function that
returns output parameters via nameref should namespace all its locals and
check that namespace, which is indeed a bit cumbersome; that's why I tend to
stay away from it :) yet it was asked...

Here I typed too fast (challenged to provide the shortest answer :) ) and
threw away my basic principles :)

Regarding the code injection I am not sure I got it.

If you are sitting at a prompt, why would you type the trick

unicorn:~$ njobs_ref 'x[0$(date>&2)]'

when you could simply type
unicorn:~$ date

I assume protected scripts/sources (the ones you can't write to) are wise
enough not to run commands based on user input; in short, I guess no
protected script does things like read in; eval $in :) which is the simplest
code injection :) and so would never give you a chance to enter
'x[0$(date>&2)]' at any time.

In any case, since doing output parameters requires some kind of namespacing
check, it would reject input of the form 'x[0$(date>&2)]'.

I guess a typical output-parameter function should resemble something like
this:

function foo
{ [[ ! "$1" =~ ^[a-zA-Z_][a-zA-Z0-9_]*$ ]]   &&
  echo "Invalid parameter name '$1'" >&2 &&
  return 1
  [ "${1:0:4}" = "foo_" ] && echo "Namespace collision '$1' " >&2 &&
  return 1

  typeset -n foo_out="$1"
  foo_out="value"
}

This one rejects the bad 'x[0$(date>&2)]', only accepts scalar variable names
as output parameters, and rejects names in the function's own foo_ namespace.
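
A hedged usage sketch of it:

$ foo result ; echo "$result"
value
$ foo 'x[0$(date>&2)]'
Invalid parameter name 'x[0$(date>&2)]'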



On Fri, Jul 2, 2021 at 2:01 PM Greg Wooledge  wrote:

> On Fri, Jul 02, 2021 at 09:09:34AM +0200, Phi Debian wrote:
> > PW$ function njobs
> > > { [ "$1" != "n" ] && typeset -n n="$1"
> > >typeset -a t ; readarray t <<<"$(jobs)" ;n=${#t[@]}
> > > }
>
> <<<$() is a poor imitation of < <() which is what you really want.
>
> readarray -t t < <(jobs); n=${#t[@]}
>
> Combining <<< and $() only gives an approximation of the same result
> (the command substitution strips off all trailing newline characters,
> and then the here-string syntax appends one newline), and it's less
> efficient, because it involves slurping the entire input into memory,
> then writing it out to a temporary file, then opening the temporary
> file for input and unlinking it, then reading it back in a second time.
>
> Using < <() avoids the newline alterations and the temporary file.
>
> (Note: in some sufficiently new versions of bash, there may not be a
> temporary file in some cases.  But it's still not the best solution,
> as it still involves storing and reading the whole output multiple times.)
>
>
> Stepping back a moment, you're using the name reference version of the
> "pass an output variable by reference" strategy.  This requires bash 4.3,
> which is reasonable, and it requires some additional sanity checks which
> you did not show.
>
> What's really interesting to me is that you did a *partial* sanity check,
> refusing to create a circular name reference if the user passed "n"
> as their output variable name.  But you forgot to check for "t", which is
> another local variable you're using.  Also, in Linda's original example,
> the output variable was literally named "n", so choosing that as your
> name reference and explicitly disallowing it is a really spiteful choice.
>
> Finally, you didn't do any sanity checking of the output variable name
> (beyond comparing it to one of your two local variable names), so your
> function is susceptible to the same code injection attacks we discussed
> earlier in the thread.
>
> unicorn:~$ njobs_ref() { typeset -n n="$1"; n=42; }
> unicorn:~$ njobs_ref 'x[0$(date>&2)]'
> Fri Jul  2 07:54:49 EDT 2021
>
> As I mentioned a few days ago, all variants of the "pass a variable name
> by reference" method are basically equivalent to each other, and all
> of them need input sanity checking in order to avoid code injections.
> (Some of the variants avoid *some* flavors of code injection, but none of
> them avoid this one.)
>
>


Re: simple prob?

2021-07-02 Thread Phi Debian
On Fri, Jul 2, 2021 at 11:15 AM Alex fxmbsw7 Ratchev 
wrote:

> good debugging, yea i missed the n # count char, i type stupidly from a
> cell foun, ..
> and yea leftover arr ekements exceots when u just use the first one as n
> i just wanted to show shorter code
>

Yes, I like those little one-liners while others go the long way with
multiple forks/execs etc. :) just fun challenges :)


Re: simple prob?

2021-07-02 Thread Phi Debian
On Fri, Jul 2, 2021 at 9:24 AM Alex fxmbsw7 Ratchev 
wrote:

> jobsn=( $( jobs -p ) ) jobsn=${jobsn[@]}
>

This gives
PW$ jobsn=( $( jobs -p ) ) jobsn=${jobsn[@]}
PW$ echo $jobsn
3644 3645 3646 3647

I guess you meant  jobsn=${#jobsn[@]}
 ^ You missed the '#'

Yet there are some leftovers:
PW$ jobsn=( $( jobs -p ) ) jobsn=${#jobsn[@]}
PW$ echo ${jobsn[@]}
4 3645 3646 3647

So it is not clean, and the OP wants a function with a named output argument.

Yet to elaborate on your technique, maybe this one is a little cleaner
(printf '%c' prints only the first character of each PID, so the length
of the resulting string is the number of jobs):

PW$ jobsn=$(printf '%c' $(jobs -p)) jobsn=${#jobsn} ; echo $jobsn
4


Re: simple prob?

2021-07-02 Thread Phi Debian
On Tue, Jun 29, 2021 at 10:23 PM L A Walsh  wrote:

> I hope a basic question isn't too offtopic.
> Say I have some number of jobs running:
>
> >  jobs|wc -l
> 3
> ---
>
> Would like to pass a varname to njobs to store the answer in, like:
>
> So I can run:
>
> >  njobs n
> echo "$n"
> 3
>
>
This is really two 'how to' questions, and I see no bash bug here.

The two questions are
- how do I pass a variable name as an output argument to a function ('n' in
your 'njobs n' example)
- how do I set a variable from a sub-command? That is not doable; a
sub-command can't return a variable to its parent, so you obviously have to
do things differently.

A simple two-liner solves all this, with no code injection worries:


PW$ jobs
[1]   Running sleep 111 &
[2]   Running sleep 111 &
[3]-  Running sleep 111 &
[4]+  Running sleep 111 &

PW$

PW$ function njobs
> { [ "$1" != "n" ] && typeset -n n="$1"
>typeset -a t ; readarray t <<<"$(jobs)" ;n=${#t[@]}
> }

PW$ njobs n ; echo $n
4

# explanations (you may skip here)
#===
[ "$1" != "n" ] && typeset -n n="$1"
This makes sure the given output variable name is a valid SHELL identifier:
anything invalid in "$1" will make the typeset -n fail there.
It also ensures that the given $1 output variable name doesn't match our
own local nameref name; if it matches, we skip our nameref and simply reuse
the upper-scope output variable, which by definition is a valid variable
name if we got that far.

typeset -a t
defines a local array that we will fill; being local means the cleanup is
done automatically at function return.

readarray t <<<"$(jobs)" ;
Fills the array with the output of the command whose lines you want to count.

n=${#t[@]}
Fills the output variable.

All is safe, all is clean, no 'apparent' temp file, no sub command :)

Shell programming is fun :)


Re: bash completion after a multiline string

2021-07-02 Thread Phi Debian
Thanx Chet for taking the time to explain.
Maybe the readline API should have a way to know that a quote ['"`] was
opened on an earlier line, so that the first occurrence of such a quote on
the current line is a closing one.

Well, I guess this is too much trouble for something that has lived this
way for such a long time :)

Cheers,


Re: function def crash

2021-06-24 Thread Phi Debian
Yes, I figured it out the hard way :)
Thanx

On Thu, Jun 24, 2021 at 1:51 PM Greg Wooledge  wrote:

> On Thu, Jun 24, 2021 at 09:48:59AM +0200, Phi Debian wrote:
> > $ function echo { \echo the function version; \echo "$@"; }
>
> For the record, this isn't how you write a function that wraps a builtin.
> Use the "builtin" command instead.
>
> echo() {
>   printf 'echo() wrapper invoked.\n'
>   builtin echo "$@"
> }
>
> Or, if you aren't sure whether the command you're wrapping is a builtin
> or a program, use "command":
>
> echo() {
>   printf 'echo() wrapper invoked.\n'
>   command echo "$@"
> }
>
> The backslash hack only stops aliases from being used.  It doesn't stop
> your function from calling itself recursively.
>
>


Re: function def crash

2021-06-24 Thread Phi Debian
Arghh! I knew it was a pilot error :)

unset PROMPT_COMMAND fixes it :)
My PROMPT_COMMAND references a function that does 'echo', and I should use
"command echo" in the echo function, not \echo.

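In other words, the corrected setup looks roughly like this (my_prompt
stands in for my real PROMPT_COMMAND function):

my_prompt() { command echo -n ; }   # 'command echo' never re-enters the wrapper below
PROMPT_COMMAND=my_prompt

echo() { command echo the function version ; command echo "$@" ; }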



On Thu, Jun 24, 2021 at 10:51 AM Phi Debian  wrote:

> I discovered that
>
> $ type -ta word
>
> Would tell me what I wanted i.e all the defs of word :)
>
>
>
> On Thu, Jun 24, 2021 at 9:48 AM Phi Debian  wrote:
>
>> Hi All,
>>
>> Dunno if this is a bug or pilot error, I bumped into this.
>>
>>
>> $ type echo
>> echo is a shell builtin
>>
>> $ function echo { \echo the function version; \echo "$@"; }
>>
>> After this I got different behavior on 'same' OS version, same BASH
>> version, meaning my environment must interact but can't tell to what extend
>> at the moment.
>>
>> On 2 systems I got bash 'crash', probaly in recursion runaway, during the
>> function definition.
>> On 1 system the definition is OK, then the run for the function crash,
>> again on what's look a recursion run away.
>>
>> I tried this tiny test case on debian 4.19.98-1 (2020-01-26) (little old)
>> and GNU bash, version 5.0.3(1)-release (i686-pc-linux-gnu) (little old too)
>> and I got the definition crash behavior.
>>
>> I bumped into this as I tried to implement something that would tell me
>> all the 'command' definition of a word0 of a shell command, so having the
>> same identifier for a program basename, an alias and a function, I wanted
>> to make something (a function?) that would tell me
>>
>> $ word0 echo
>> echo is an alias
>> echo is a function
>> echo is a program (a.out...)
>>
>>
>> I choosed randomly 'echo' as a test case, I could have used ls, etc
>> only echo seems to choke.
>>
>> Cheers,
>> Phi
>>
>


Re: function def crash

2021-06-24 Thread Phi Debian
I discovered that

$ type -ta word

Would tell me what I wanted, i.e. all the defs of a word :)



On Thu, Jun 24, 2021 at 9:48 AM Phi Debian  wrote:

> Hi All,
>
> Dunno if this is a bug or pilot error, I bumped into this.
>
>
> $ type echo
> echo is a shell builtin
>
> $ function echo { \echo the function version; \echo "$@"; }
>
> After this I got different behavior on 'same' OS version, same BASH
> version, meaning my environment must interact but can't tell to what extend
> at the moment.
>
> On 2 systems I got bash 'crash', probaly in recursion runaway, during the
> function definition.
> On 1 system the definition is OK, then the run for the function crash,
> again on what's look a recursion run away.
>
> I tried this tiny test case on debian 4.19.98-1 (2020-01-26) (little old)
> and GNU bash, version 5.0.3(1)-release (i686-pc-linux-gnu) (little old too)
> and I got the definition crash behavior.
>
> I bumped into this as I tried to implement something that would tell me
> all the 'command' definition of a word0 of a shell command, so having the
> same identifier for a program basename, an alias and a function, I wanted
> to make something (a function?) that would tell me
>
> $ word0 echo
> echo is an alias
> echo is a function
> echo is a program (a.out...)
>
>
> I choosed randomly 'echo' as a test case, I could have used ls, etc
> only echo seems to choke.
>
> Cheers,
> Phi
>


function def crash

2021-06-24 Thread Phi Debian
Hi All,

Dunno if this is a bug or pilot error, I bumped into this.


$ type echo
echo is a shell builtin

$ function echo { \echo the function version; \echo "$@"; }

After this I got different behavior on the 'same' OS version and same BASH
version, meaning my environment must be interfering, but I can't tell to
what extent at the moment.

On 2 systems I got a bash 'crash', probably a recursion runaway, during the
function definition.
On 1 system the definition is OK, but then running the function crashes,
again in what looks like a recursion runaway.

I tried this tiny test case on debian 4.19.98-1 (2020-01-26) (little old)
and GNU bash, version 5.0.3(1)-release (i686-pc-linux-gnu) (little old too)
and I got the definition crash behavior.

I bumped into this as I tried to implement something that would tell me all
the 'command' definitions of the word0 of a shell command; so, with the same
identifier used for a program basename, an alias and a function, I wanted to
make something (a function?) that would tell me

$ word0 echo
echo is an alias
echo is a function
echo is a program (a.out...)


I chose 'echo' randomly as a test case; I could have used ls, etc., but
only echo seems to choke.

Cheers,
Phi


Re: Feature request: index of index

2021-06-22 Thread Phi Debian
Meanwhile, you may use functions to set up your variables, something along
these lines:

PW$ function first-index { echo $1; }
PW$ function last-index { shift $(($#-1)) ; echo $1; }

PW$ declare -a array=([5]="hello" [11]="world" [42]="here")
PW$ declare -i first_index=$(first-index ${!array[@]})
PW$ declare -i last_index=$(last-index ${!array[@]})

PW$ echo $first_index $last_index
5 42
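
If you prefer to avoid the command substitutions, a nameref variant in the
same spirit could be (just a sketch, without the variable-name sanity
checks a robust version would need):

PW$ function first-index { typeset -n _out="$1"; shift; _out=$1; }
PW$ function last-index  { typeset -n _out="$1"; shift; _out=${!#}; }

PW$ declare -a array=([5]="hello" [11]="world" [42]="here")
PW$ first-index first_index "${!array[@]}"
PW$ last-index  last_index  "${!array[@]}"

PW$ echo $first_index $last_index
5 42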

On Thu, May 6, 2021 at 1:24 PM Léa Gris  wrote:

> Currently, to know the index of an array's indexes (example: first or
> last index), it needs an intermediary index array:
>
> > #!/usr/bin/env bash
> >
> > declare -a array=([5]=hello [11]=world [42]=here)
> > declare -ai indexes=("${!array[@]}")
> > declare -i first_index=${indexes[*]:0:1}
> > declare -i last_index=${indexes[*]: -1:1}
> > declare -p array indexes first_index last_index
>
> Which prints:
> > declare -a array=([5]="hello" [11]="world" [42]="here")
> > declare -ai indexes=([0]="5" [1]="11" [2]="42")
> > declare -i first_index="5"
> > declare -i last_index="42"
>
> It would be convenient to be able to index directly with this syntax:
>
> declare -i first_index=${!array[@]:0:1}
> declare -i last_index=${!array{@}: -1:1}
>
>
> --
> Léa Gris
>
>
>


Re: bash completion after a multiline string

2021-06-22 Thread Phi Debian
Maybe posting a link is not appropriate, so I cut/paste it here.

I bumped into this problem regarding bash completion and can't find any
reference to it.

When doing

$ echo foo "bar" /tm

I got /tm expanded to /tmp/ that is indeed correct.

But if I do

$ echo foo "bar
more bar" /tm

No completion is done. My question: is this behavior a feature or a bug?

My bash version is

GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)

Thanx in advance.


bash completion after a multiline string

2021-06-21 Thread Phi Debian
Hi All,

I posted a question to STKO and someone suggested I should open a 'feature
request' here.

https://stackoverflow.com/questions/68065039/bash-completion-after-a-multiline-string

Thanx in advance,
Phi


Re: command_not_found_handle() flaw

2020-03-11 Thread Phi Debian
Hi All,

Ok, I get the picture now, and I owe you an apology, but with what I have
in front of me (ubuntu 20.04, latest bash git) the mistake was easy to make
on my side.

Regarding the doc, I admit I got the latest bash source (git) BUT I read
the ubuntu 20.04 doc, which still DOESN'T mention the separate context (and
won't for the 2 years to come :) ). Reading the latest 5.x bash.1 I reckon
the separate context is documented there, but up to that point (i.e. with
the old doc in mind) I could not know that this function ran in a
"separate context".

Then the $$ vs $BASHPID confusion was a consequence of this.

So now to the questioning about the implementation of this 'feature', which
seems to be 'how to do something when a command is not found in a command
interpreter'; the idea behind it is a mechanism for command interpreters
(at large) to be able to offer a 'distro package suggestion', or if that
was not the goal, at least this is how it is used at the moment.

To me, to fulfill the feature as stated here, two implementations were
possible.

- A pure 'handler', like a trap handler, executed in the top shell context.
This handler then has to be installed (sourced) in the shell startup
(similar to now).
- A pure script/program invocation, hence in a separate process and a
separate context. Something along the lines of: if BASH_CNF is set, run
$BASH_CNF "$@". This is pseudo-code for execute_disk_command(): check a
BASH variable saying the user wants CNF handling and, if so, simply
prepend $BASH_CNF (likely to be command-not-found), as it currently
prepends command_not_found_handle, and retry search_for_command() once
with the new run string. This way it would go and chase command-not-found
wherever the distro (or a user replacement) placed it in the PATH. The
distro would simply set BASH_CNF=command-not-found, and the user keeps
control in her rc files for adjusting the setting (see the sketch just
below).
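
Expressed as shell code, the second option would conceptually behave like
this (illustration only; BASH_CNF is the hypothetical variable of this
proposal, not an existing bash feature):

# conceptual shell equivalent of the proposed C change
run_or_cnf()
{ if type -P "$1" >/dev/null 2>&1; then
    "$@"
  elif [ -n "$BASH_CNF" ]; then
    "$BASH_CNF" "$@"    # separate process, separate context, like today
  else
    printf 'bash: %s: command not found\n' "$1" >&2
    return 127
  fi
}
# and all a distro (or the user) would have to set:
BASH_CNF=/usr/lib/command-not-found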

A function invocation in a sub-shell (separate context) is the worst idea,
simply because
1) It requires a doc fixup, which was done for 5.x, plus a clear refresh of
the users' minds; a misleading path.
2) It needs some ugly code in execute_disk_command()

I think in the current logic (distro suggestion) a pure chase of
command-not-found in PATH would have been simpler to document and implement.

Yet I personally prefer the 'trap logic', because it serves the same
purpose as the current mechanism (distro suggestions) yet opens the door to
other things, like function autoloading or customised error handling on CNF.

After all, a CNF situation is an error, and a trap 'blah' ERR is correctly
triggered, with the handler executed in the top shell context. But we don't
want to implement CNF handling via trap ERR, because a trap on ERR is a bit
heavy.
So as an 'enhancement' (maybe I am dreaming), having an ERRCNF pseudo-signal
would be ideal; it could even keep the 'old' half-hearted
command_not_found_handle() (not a handler, not a script) and the two would
work together. I noted the subtlety in that function name: it is not called
command_not_found_handler() just because it is not a handler :) that's
what should have warned me from the beginning :)
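
Purely to illustrate the ERRCNF idea (no such pseudo-signal exists in bash
today, and I assume the handler could inspect $BASH_COMMAND the way an ERR
trap can):

trap 'cnf_autoload' ERRCNF                  # hypothetical pseudo-signal
function cnf_autoload
{ local cmd=${BASH_COMMAND%% *}             # first word of the failing command line
  [ -r ~/.bash_funcs/"$cmd" ] && . ~/.bash_funcs/"$cmd"   # loaded in the top shell
}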


Anyway, I will cope with that; I have an implementation of lazy function
autoload that works with the current implementation:
command_not_found_handle() simply sends a signal to the main shell saying
we got a CNF :) ugly, but it works.
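
To give an idea, such a signal-based scheme can be wired up like this (the
pending file and the USR1 choice are just one possible way to do it):

trap '[ -r ~/.cnf_pending ] && . ~/.cnf_pending && command rm -f ~/.cnf_pending' USR1
function command_not_found_handle
{ printf 'source ~/.bash_funcs/%q\n' "$1" > ~/.cnf_pending   # we run in a subshell here
  kill -USR1 $$    # but $$ still names the main shell, so poke it
  return 127
}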

I love bash and all the evolution of it.
Cheers,
Phi


Re: command_not_found_handle() flaw

2020-03-10 Thread Phi Debian
Hi All,

I think it is a bug: it is not working as documented, at least per the man
page, which says:

   If that function exists, it is invoked with the original command and
   the original command's arguments as its arguments, and the function's
   exit status becomes the exit status of the shell.  If that function
   is not defined, the shell prints an error message and returns an exit
   status of 127.

A function invocation doesn't imply a subshell.

Secondly, if you insist on going down the subshell path, then $$ and $PPID
ought to be set up appropriately; here they are plain bogus.

Third, the argument saying 'too late, we already forked' is dumb; it is
only too late because the sequence

hookf = find_function (NOTFOUND_HOOK);
execute_shell_function (hookf, wl);

is misplaced after the fork, while it should be done before the fork.

I provided a fix that no one tested, which doesn't jeopardise the existing
code or behavior and respects what a function invocation is.

I can survive a "don't fix"; well, I can fix it for myself, but as it is
you must at least fix 2 points: fix the doc to say that
command_not_found_handle() is called in a subshell, and fix $$ and $PPID,
which are bogus in the subshell.

BTW fixing $$ and $PPID will take more effort than just placing the hook
before the fork, but that's your decision indeed :)

I think that either command_not_found_handle() is a gross hack to satisfy
the command-not-found distro packages, and in that case it should simply
not be documented at all, or else it should be implemented as specified,
i.e. as a function invocation, not a subshell running a function, which
frankly is a bit ridiculous.

Cheers,
Phi


Patch: command_not_found_handle() flaw

2020-03-10 Thread Phi Debian
diff --git a/execute_cmd.c b/execute_cmd.c
index 3864986d..ef32631a 100644
--- a/execute_cmd.c
+++ b/execute_cmd.c
@@ -5370,6 +5370,20 @@ execute_disk_command (words, redirects, command_line, pipe_in, pipe_out,

   command = search_for_command (pathname, CMDSRCH_HASH|(stdpath ?
CMDSRCH_STDPATH : 0));
   QUIT;
+
+#define PHI_CNF 1
+#if(PHI_CNF)
+  if (command == 0)
+  { hookf = find_function (NOTFOUND_HOOK);
+   if(hookf)
+{ wl = make_word_list (make_word (NOTFOUND_HOOK), words);
+  result=execute_shell_function (hookf, wl);
+  wl->next=0;
+  dispose_words(wl);
+  goto parent_return;
+}
+  }
+#endif  // PHI_CNF

   if (command)
 {
@@ -5448,7 +5462,10 @@ execute_disk_command (words, redirects, command_line, pipe_in, pipe_out,

   if (command == 0)
{
+#if(!PHI_CNF)
+// with PHI_CNF==1  hookf==0
  hookf = find_function (NOTFOUND_HOOK);
+#endif
  if (hookf == 0)
{
  /* Make sure filenames are displayed using printable
characters */


command_not_found_handle() flaw

2020-03-10 Thread Phi Debian
Hi All,

While trying to implement a function autoload feature, with lazier function
loading and invocation than the example provided in the bash doc, I bumped
into this bash bug (or feature, you let me know).

I asked stackoverflow for some ideas; someone pointed me to
command_not_found_handle(), which I was not aware of and which looked
promising for what I wanted to achieve.

This SO post is quite big, so I will report here a shorter version.

In a nutshell, to implement function autoloading I want to plug into
command_not_found_handle(), but I need command_not_found_handle() to be
evaluated in the shell context, not in a subshell.

At the moment, BASH_VERSION='5.0.16(14)-release' (and all the way back),
the function is wrongly called in a sub-shell. That doesn't hurt the
command-not-found packages found in many distros, as they don't care about
setting things in the shell instance, so their implementation is good
enough.

To demonstrate the bug, it suffices to do this:


function command_not_found_handle
{ echo $$ ; sh -c 'echo $PPID'
}

When run in shell context we got the correct answer
$ function command_not_found_handle
> { echo $$ ; sh -c 'echo $PPID'
> }
$
$  command_not_found_handle
11370
11370

But when called from the 'not found' context we get
$ ddd # ddd is not found on my system, so command_not_found_handle is called
11779
11783
$

As we see here, $$ lies and pretends to be the shell PID 11779, while the
real PID is 11783, i.e. a child.

This is a real problem because then there is nothing I can do in
command_not_found_handle() to set things up in the shell context, i.e. no
new alias setup, no new function definition, etc.

I propose a fix, yet bear in mind hacking bash is not my day-to-day job :)
so it must be reviewed by gurus. I send the proposed patch separately.

With this fix I got this run
PW$ PS1='$ '
$ function command_not_found_handle
> { echo $$ ; sh -c 'echo $PPID'
> }
$ ddd
12165
12165

$ function command_not_found_handle
> { echo in cnf
>   A="cnf-was-called"
>  return 127
> }
$ A=''
$ ddd
in cnf
$ echo $A
cnf-was-called
$

Meaning that now command_not_found_handle() is a real function called from
the shell context; the shell context can then be touched (vars, aliases,
functions, etc.), which is what I need to implement my autoload mechanism.
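
With the fix in place, the autoload handler I have in mind boils down to
something like this (a sketch; the one-file-per-function layout under
~/.bash_funcs is just an example):

function command_not_found_handle
{ local fdir=~/.bash_funcs
  if [ -r "$fdir/$1" ]; then
    . "$fdir/$1"    # defines the function in the current shell, thanks to the fix
    "$@"            # then rerun the command line that was just typed
  else
    printf 'bash: %s: command not found\n' "$1" >&2
    return 127
  fi
}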

Note that it is indeed possible to chain the handlers; assuming a
command-not-found package is installed, I get this on ubuntu:
$ declare -f command_not_found_handle
command_not_found_handle ()
{
if [ -x /usr/lib/command-not-found ]; then
/usr/lib/command-not-found -- "$1";
return $?;
else
if [ -x /usr/share/command-not-found/command-not-found ]; then
/usr/share/command-not-found/command-not-found -- "$1";
return $?;
else
printf "%s: command not found\n" "$1" 1>&2;
return 127;
fi;
fi
}

And a run gives

$ ddd

Command 'ddd' not found, but can be installed with:

sudo apt install ddd


If I now want to install my handler, I rename the current handler:
# Rename current command_not_found_handle
_cnf_body=$(declare -f command_not_found_handle | tail -n +2)
eval "function _cnf_prev $_cnf_body"

Then I create mine, chaining to the previous one:

function command_not_found_handle
{ echo in cnf
  A="cnf-was-called"
  _cnf_prev "$@"
}

 $ unset A
$ ddd
in cnf

Command 'ddd' not found, but can be installed with:

sudo apt install ddd

$ echo $A
cnf-was-called

This shows that my handler was called and, on failure to find a function,
forwarded to the previous handler; it also shows that it ran in the shell
context, as A is set correctly.

I detected no memory leak with this patch, but again, review thoroughly.

Hope this helps.

Cheers,
Phi


Re: Bracketed paste mode breaks cooked mode's tab + backspace

2018-01-30 Thread Phi Debian
Hi Egmont

On Tue, Jan 30, 2018 at 9:47 AM, Egmont Koblinger  wrote:

> Not sure why you don't see this bug. (You have at least v4.4, and
> started a new shell after enabling bracketed paste, is that right?)
> One theoretical explanation could be that your kernel's tty driver is
> slightly different and expects the leading ESC byte to also move the
> cursor, resulting in 8 characters in total, making no difference in
> the modulo 8 computation for the tabs. (I don't think this is the
> explanation, though.)

I guess I got an std kernel driver :)
TC$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 16.04.3 LTS
Release:16.04
Codename:   xenial



>
> Even without bracketed paste mode, the issue should be reproducible
> with commands like:
> echo -ne '\e[1m'; cat
> Obviously it's not bash's fault and there's nothing bash could or
> should do with this one.

Partially reproduce,
   is ok, i.e jump fwd 8 and backward 8
XX  is ok got X jump fwd 7 X backward 7
Xwrong got X jump fwd 7 backward 4 (yet ok in the buffer)

>
> gnome-terminal doesn't use X resources either. gnome-terminal and
> terminator both use the same VTE widget for terminal emulation, and
> both have a setting under profile prefs -> compatibility to specify
> what Backspace and Delete do.

Gnome-terminal has its own backspace key mapping (so no X resource needed,
yet mapped ok); terminator unfortunately doesn't have it :)

xterm uses X resources; that is what I use, i.e.

xmodmap: backspace gives 0x8
X resources: VT100 Backspace 0x08
stty erase ^H
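
One way to spell out that X resource side (resource name from memory, so
double-check against the xterm man page):

! ~/.Xresources
XTerm*backarrowKey: true    ! backspace key sends BS (0x08) instead of DEL (0x7f)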

With that I am almost an happy camper :)


Cheers,
Phi



Re: Bracketed paste mode breaks cooked mode's tab + backspace

2018-01-29 Thread Phi Debian
Hi All,

I don't reproduce it.

This is not directly related to bash; it depends on the line discipline
on the tty. I use ^H as erase and have no problems with <backspace>.
gnome-term and xterm are ok with it, <backspace> generates 0x8;
terminator is confused, as it doesn't use X resources and then can't map
0x8 onto <backspace>.


Having <backspace> generate 0x7f has been a flaw for ages :)

Cheers,
Phi



Re: How to run bash test suite

2017-07-03 Thread Phi Debian
Hi Chet,
Thanx for the primer on QA :)

I did manage to run them.
I reworked run-all to run everything in parallel, I fixed some tests to add
more determinism to their output (background jobs), and I now have a fast
way to run the QA that raises an attention flag only when needed.

Let's hack the bash now :)

Cheers,
Phi



How to run bash test suite

2017-07-02 Thread Phi Debian
Hi All,
I tried to subscribe to bug-bash but never got the confirmation mail.

I grabbed the latest source code available with my Ubuntu distro and made a
bash build. All is fine. I'd like to run the test suite, but I found no doc
about it. I did a brute "make test" from the build src dir, but I don't
understand how to decipher all the output there. Should I trust it and save
it as a reference output before doing shell hack experiments, then rerun
and compare outputs? Or is the test output self-sufficient, i.e. does it
flag errors by itself? I can see 'warning:' lines; maybe errors will show
up as 'error:', and then a simple make test >out 2>&1 followed by
grep error: out is enough.
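
To make the compare-against-a-reference idea concrete, this is all I have
in mind:

make test > ref.out 2>&1        # before touching anything
# ... hack bash, rebuild ...
make test > new.out 2>&1
diff ref.out new.out            # only new differences should need attention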

Any help appreciated.
Cheers,
Phi