Hello.

Well, first I was thinking of a "%*" special so I can say
"kill %*", but get_job_spec() and its usage do not look promising.

The task: close all jobs at once (and dreaming of
%ID1-%ID2,%ID3-%ID4, etc.).  I.e., I often (really) do things that
require many instances of less(1), or git(1) log, and such, but
then the task is done and all of those can (and shall) vanish at
once.  I mean, OK, I *could* create a recursive bash(1) and then
simply quit that shell, maybe, ... but often I do not know in
advance that such a sprint is starting.

The "problem" with the current way bash is doing it is that bash's
job handling does not recognize jobs die under the hood:

  $ jobs
  [1]-  Stopped                 LESS= less -RIFe README
  [2]+  Stopped                 LESS= less -RIFe TODO
  $ kill $(jobs -p)
  $

^ nothing

  $ jobs
  [1]-  Stopped                 LESS= less -RIFe README
  [2]+  Stopped                 LESS= less -RIFe TODO
  $ fg
  LESS= less -RIFe TODO
  ?=15
  $ fg
  LESS= less -RIFe README
  ?=15
  $
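
Presumably that is because the processes are stopped: the TERM
signal stays pending and is only acted upon once a job is
continued, which is what fg does, and why each fg then reports
?=15.  Sending an explicit continue after the kill seems to do
the trick here (just a workaround sketch, still using raw
process IDs):

  $ kill $(jobs -p); kill -s CONT $(jobs -p)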

Compared to proper job handling:

  $ jobs
  [1]   Stopped                 git loca
  [2]-  Stopped                 LESS= less -RIFe TODO
  [3]+  Stopped                 LESS= less -RIFe README
  $ kill %2 %3
  $
  [2]   Exit 15                 LESS= less -RIFe TODO
  [3]-  Exit 15                 LESS= less -RIFe README
  $

If there were a "jobs -i" to properly unfold the IDs of living
jobs, in the same spirit as -p printing the jobs' process IDs,
then this would be great.
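
Until then it can be faked by scraping the job numbers out of the
jobs listing; a rough sketch (the alljobspecs name is made up,
parsing the listing is of course not a stable interface, and it
relies on jobs still seeing the job table inside the command
substitution, which bash seems to handle):

  $ alljobspecs() { jobs | sed -n 's/^\[\([0-9][0-9]*\)\].*/%\1/p'; }
  $ kill $(alljobspecs)

The function prints one %N jobspec per listed job, so the kill
builtin sees jobspecs instead of bare process IDs (and, as shown
above, handles the stopped ones properly).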

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)
