Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-09 Thread Linda Walsh



Greg Wooledge wrote:

there is a
file named /etc/udev/rules.d/70-persistent-net.rules 


Perhaps you should get in touch with a SuSE mailing list and see how
other SuSE users have dealt with this.  You might be reinventing the
wheel.

---
SuSE has had the same file.  At first we were told port name customization
was going away.  That didn't go well, so they decided to continue to support
the udev rules for this next release.  I.e. -- it was going to be removed
this release, but that decision got reversed this time.  I figure it's
just a matter of time, like other things they wanted to get rid of.

Seems the systemd people may put in a feature to
deal with this, but they really would rather people use hardware/bus-mapped
names like p5p1, p5p2, p3p1, p3p2, which would change if you moved a card from
one slot to another (those are what I get for eth[0145]).

So... I think I got into the current system in some early incarnation of it.
It left my old interfaces the same, but new interfaces were added according
to the new scheme with no easy way to get the old behavior back.  After manually
renaming them, they stayed, but again... for how long?  I figured I should
have a script that would set/reset them to consistent values so I wouldn't
have to deal with variable new-release policies.

Even using the basic color and status scripts from their init process is
problematic now, as it has checks to no longer call scripts directly -- but
rather systemd, if its configs tell it to -- i.e. a status include file was
upgraded to forward some requests to systemd, while others were replaced by
systemd commands.  Nice status.




Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-08 Thread Greg Wooledge
On Tue, Oct 07, 2014 at 03:03:43PM -0700, Linda Walsh wrote:
 It's not thoroughly  polished yet, but it does work.  It can rename and 
 reorder your
 network interfaces on boot so you can keep them with consistent names and
 ordering -- something systemd doesn't seem to support.

I don't know SuSE, and I don't know systemd, but Debian solved this
problem years ago.  They use udev hooks.  Specifically, there is a
file named /etc/udev/rules.d/70-persistent-net.rules that holds the MAC
address of each interface, and the desired interface name for it.
I don't personally know all the details of how it works, because I never
NEEDED to learn them.
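For reference, an entry in that Debian-style rules file pins an interface name to a MAC address.  This is an illustrative sketch only -- the MAC and name here are invented, and the exact attribute set varies by distro and udev version:

```
# /etc/udev/rules.d/70-persistent-net.rules (illustrative entry)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="a0:36:9f:15:c9:c2", KERNEL=="eth*", NAME="eth5"
```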

Perhaps you should get in touch with a SuSE mailing list and see how
other SuSE users have dealt with this.  You might be reinventing the
wheel.



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Linda Walsh



Pierre Gaston wrote:


b=$a is not doing anything so I wonder how much value this example has.

---
I wondered about that... I think that was meant to be the b=<<<$a form,
w/o the copy that Greg said was pointless.


A pipe means 2 different processes, a tempfile for a heredoc does not.


First: we are talking about 2 separate processes...
That I fit those into a heredoc paradigm was already a kludge.

originally I had (and wanted)

producer-func -- call it prod()
and process-func + store-in-global for future

i.e. producer | process-func -> vars... but I want the vars in the parent... so

processfunc < <(producer) -- now process can store results but it's using
bogus magic called process substitution... naw...
it's a LtR pipe instead of a RtL pipe. or

parent < <(child) pipe

instead of the standard:

child|parent pipe.   The parent being the one who's vars live on
afterwards.

I thought I was getting rid of this bogus problem (which shouldn't
be a problem anyway -- since it's just another pipe but with parent receiving
the data instead of child receiving it) by storing it in a variable transfer
form (<<<$VAR)... because I was told that didn't involve a voodoo
process-substitution.

But it involves a heredoc that even though /tmp and /tmp/var are writeable,
BASH can't use.  Either it's writing to //network or trying to open a
tmp file in the current (/sys) directory... either way is buggy.

But a heredoc is just reading from an in-memory buffer.  That bash is
going the inefficient route and putting the buffer in tmp, first, is
the 2nd problem you are pointing out -- why do people who want a heredoc
need a fork OR a tmpfile?   The output I generated is < 512 bytes.  Would it
be too silly to simply put any heredoc output < X[KMG] in memory and not
write it to any file?  shopt heredocsize=XXX[KMG]... and overflow to tmp
if needed.

But more basically (no limits need be implemented):
why is there a difference between:

parent producer | child reader
   vs.
parent reader |  child producer

???

If you want to ask your Q, you should be asking why fork OR *tmpfile.

In my case, why do I need a voodoo process that can't be done with pipes...
because I'm pretty sure a parent can spawn children and then read from them
just as easily as write to them.






Besides your comparison is simply ludicrous, bash provides several
mechanisms to do several very different things. 

---
Having several that would work would be nice.  Um... having 1 that would
work, even... how do you turn around parent/child I/O and get the vars on the
other side without a heredoc or procsub?  Why no pipes?




Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Dave Rutherford
On Tue, Oct 7, 2014 at 2:25 AM, Dave Rutherford d...@evilpettingzoo.com
wrote:
**it.. sorry for the fat finger post. Gmail puts the tiny formatting options
right next to the big SEND button. Ratzen fracken.


Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Dan Douglas
On Monday, October 06, 2014 02:00:47 PM Linda Walsh wrote:
 
 Greg Wooledge wrote:
  On Mon, Oct 06, 2014 at 12:38:21PM -0700, Linda Walsh wrote:
  According to Chet , only way to do a multi-var assignment in bash is
 
  read a b c d <<< $(echo ${array[@]})

It's best to avoid turning your nice data structure into a stream only to read 
(and mangle) it back again.

If you must, create a function.

function unpack {
    ${1+:} return 1
    typeset -n _arrRef=$1 _ref
    typeset -a _keys=("${!_arrRef[@]}")
    typeset -i n=0
    shift

    for _ref; do
        ${_keys[n]+:} return
        _ref=${_arrRef[${_keys[n++]}]}
    done
}

#  $ a=(1 2 3) e=hi; unpack a b c d e; typeset -p {a..e}
# declare -a a='([0]=1 [1]=2 [2]=3)'
# declare -- b=1
# declare -- c=2
# declare -- d=3
# declare -- e=hi

 Um... it used a socket.. to transfer it, then it uses a tmp file on top
 of that?!  :

That's the wrong process. AF_NETLINK is the interface used by iproute2 for IPC 
with the kernel. Bash doesn't use netlink sockets.

  So why would someone use a tmp file to do an assignment.
  
  It has nothing to do with assignments.  Temp files are how here documents
  (and their cousin here strings) are implemented.  A here document is a
  redirection of standard input.  A redirection *from a file*
 
   Nope:
   This type of redirection instructs the shell to read input from the
 current source until a line containing only delimiter (with no trailing
 blanks) is seen.  All of the lines read up to that point are then used
 as the standard input for a command.
 
 The current source -- can be anything that input can be
 redirected from -- including memory.
 
 
  in fact,
  albeit a file that is created on the fly for you by the shell.
 
 Gratuitous expectation of slow resources...  Non-conservation
 of resources, not for the user, but for itself.
 
 Note:
 a=$(<foo)
  echo $a
 one
 two
 three
 ---
 no tmp files used, but it does a file read on foo.
 
 b=$a
 -- the above uses no tmp files.
 
 b=$(echo $a)
 ---
 THAT uses no tmp files.
 
 But
 b=<<<$a
 
 uses a tmp file.
 
 That's ridiculously lame.

That isn't working code the way you think (except in mksh). In most shells it's 
just a null command that sets b to empty.

Just perform the assignment directly. There's no need for any special 
expansion or redirect.
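The direct assignment involves no herestring and therefore no temp file.  A small sketch with hypothetical data:

```shell
# Copy array elements straight into scalars -- no stream, no re-parsing.
arr=(one two three four)
a=${arr[0]} b=${arr[1]} c=${arr[2]} d=${arr[3]}
echo "$a $b $c $d"
```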

  So why would a temp file be used?
  
  Historical reasons, and because many processes will expect standard input
  to have random access capabilities (at least the ability to rewind via
  lseek).  Since bash has no idea what program is going to be reading the
  here document, why take chances?
 
   How could you use a seek in the middle of a HERE doc
 read?
 
 Not only that, it's just wrong.  /dev/stdin most often comes from
 a tty which is not random access.

There is no requirement for how heredocs are implemented. Temp files are 
perfectly reasonable and most shells use them. The shell reads the heredoc, 
but it is the command that would be doing any seeking, or possibly the shell 
itself via nonstandard features like the <#(()) redirect.

This can be an optimization. Consider:

for file in file1 ... filen; do
    ed -s "$file" < /dev/fd/0
done <<\EOF
ed script...
wq
EOF

In the above case, bash doesn't actually have to re-create the input 
repeatedly, it just creates a new FD, effectively rewinding the input.

-- 
Dan Douglas



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Chet Ramey
On 10/6/14, 3:14 PM, Linda Walsh wrote:
 In running a startup script, I am endeavoring not to use tmp files
 where possible, As part of this, I sent the output of a command
 to stdout where I read it using the variable read syntax:
 
while read ifname hwaddr; do
    printf "ifname=%s, hwaddr=%s\n" "$ifname" "$hwaddr"
    act_hw2if[$hwaddr]=$ifname
    act_if2hw[$ifname]=$hwaddr
    printf "act_hw2if[%s]=%s, act_if2hw[%s]=%s\n" \
        "${act_hw2if[$hwaddr]}" "${act_if2hw[$ifname]}"
done <<< $(get_net_IFnames_hwaddrs)
 
 Note, I used the <<< $() form to avoid process substitution -- as I was told
 on this list that the <<< form didn't use process substitution.

Correct, it does not.

 
 So I now get:
 /etc/init.d/boot.assign_netif_names#192(get_net_IFnames_hwaddrs) 
 echo eth5 a0:36:9f:15:c9:c2
 /etc/init.d/boot.assign_netif_names: line 203: cannot create temp file for
 here-document: No such file or directory
 
 Where am I using a HERE doc?

You're using a here-string, which is a variant of a here-document.  They
differ very little, mostly in syntax.  The internal implementation is
identical.

 More importantly, at this point, where is it trying to write? (Or read?)
 
 Simple Q.  Why isn't it using some small fraction of the 90+G of memory
 that is free for use at this point?

Because the shell uses temp files for this.  It's a simple, general
solution to the problem of providing arbitrary data on stdin.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Chet Ramey
On 10/6/14, 5:00 PM, Linda Walsh wrote:

 You are working in a severely constrained environment. 
 That isn't the problem:  the assignment using a tmp file is:
 strace -p 48785 -ff
 Process 48785 attached
 read(0, "\r", 1)= 1
 write(2, "\n", 1)   = 1
 socket(PF_NETLINK, SOCK_RAW, 9) = 3
 sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000},
 msg_iov(2)=[{"*\0\0\0d\4\1\0\0\0\0\0\0\0\0\0", 16}, {"read a b c d
 <<<${arr[@]}\0", 26}], msg_controllen=0, msg_flags=0}, 0) = 42
 close(3)= 0
 -
 Um... it used a socket.. to transfer it, then it uses a tmp file on top
 of that?!  :

Slow down and read the trace, espcially the data being transferred.  This
is syslog saving the command.

 rt_sigaction(SIGINT, {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0},
 {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, 8) = 0
 open("/tmp/sh-thd-110678907923", O_WRONLY|O_CREAT|O_EXCL|O_TRUNC, 0600) = 3
 write(3, "one two three four", 18)  = 18
 write(3, "\n", 1)   = 1
 open("/tmp/sh-thd-110678907923", O_RDONLY) = 4
 close(3)= 0
 unlink("/tmp/sh-thd-110678907923")  = 0
 fcntl(0, F_GETFD)   = 0
 fcntl(0, F_DUPFD, 10)   = 10
 fcntl(0, F_GETFD)   = 0
 fcntl(10, F_SETFD, FD_CLOEXEC)  = 0
 dup2(4, 0)  = 0
 close(4)= 0
 ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS,
 0x7fff85627820) = -1 ENOTTY (Inappropriate ioctl for device)
 lseek(0, 0, SEEK_CUR)   = 0
 read(0, "one two three four\n", 128)= 19
 dup2(10, 0) = 0
 fcntl(10, F_GETFD)  = 0x1 (flags FD_CLOEXEC)
 close(10)   = 0
 -
 
 Why in gods name would it use a socket (still of arguable overhead, when
 it could be done in a local lib), but THEN it duplicates the i/o in a file?

Read the trace.  There's no duplication going on.  The shell uses temp
files to implement here documents -- an implementation choice you may
disagree with under your peculiar circumstances but one that is simple
and general.


 Maybe the best solution here is to move your script to a different part
 of the boot sequence.  If you run it after all the local file systems
 have been mounted, then you should be able to create temporary files,
 which in turn means << and <<< become available, should you need them.
 
 Theoretically, they ARE mounted.  What I think may be happening
 is that $TMP is not set so it is trying to open the tmp dir in:
 
 //sh-thd-183928392381 -- a network address.

This is not a network address.  The pathname you've shown is consistent
with TMPDIR being set to a single slash (TMPDIR=/), which is a directory
writable by the current user.

You might try setting TMPDIR=/tmp in your startup script.
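A sketch of that workaround (assuming /tmp is mounted and writable by the time the script runs; note that newer bash releases may use a pipe instead of a temp file for short here-documents, in which case TMPDIR is never consulted):

```shell
# Point the shell's temp-file machinery at a known-writable directory
# before any herestring/heredoc executes.
export TMPDIR=/tmp

read -r ifname hwaddr <<< 'eth5 a0:36:9f:15:c9:c2'
echo "$ifname $hwaddr"
```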


 ---
 Correctness before elegance.  1), use memory before OS services.
 2) use in-memory services before file services, 3) Don't use uninitialized
 variables (TMP) -- verify that they are sane values before usage.
 4) don't use network for tmp when /tmp or /var/tmp would have worked just
 fine.

This is basically word salad.


  This type of redirection instructs the shell to  read  input  from  the
current source until a line containing only delimiter (with no trailing
blanks) is seen.  All of the lines read up to that point are then  used
as the standard input for a command.
 
 The current source -- can be anything that input can be
 redirected from -- including memory.

In your case, the current source is the script, where the data appears.

 How could you use a seek in the middle of a HERE doc
 read?

Think about your use cases.  Many programs check whether or not their
input is a seekable device so they can read more than one character at
a time.

 Not only that, it's just wrong.  /dev/stdin most often comes from
 a tty which is not random access.

Again, it depends on your use cases.  There are a number of programs
whose primary use is in a filter environment rather than directly by
a user at a terminal.

 Creating a tmp file to do an assignment, I assert is a bug.

You're still mixing a general capability with your use of that capability.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Chet Ramey
On 10/7/14, 2:07 AM, Linda Walsh wrote:

 I thought I was getting rid of this bogus problem (which shouldn't
 be a problem anyway -- since it's just another pipe but with parent receiving
 the data instead of child receiving it) by storing it in a variable
 transfer form
 (<<<$VAR)... because I was told that didn't involve a voodoo process-substitution.

I suppose it's voodoo because you didn't understand the underlying OS
resource process substitution uses and therefore didn't realize when that
resource is and is not available.  People using process substitution using
/dev/fd on a system like FreeBSD also encounter similar issues, since
/dev/fd is a separate file system that must be mounted.

 
 But it involves a heredoc that even though /tmp and /tmp/var are writeable,
 BASH can't use.  Either it's writing to //network or trying to open a
 tmp file in the current (/sys) directory... either way is buggy.

You misunderstand what is going on, but now that you know here documents
and here strings use temporary files, and that, as documented, bash uses
$TMPDIR as the directory where it creates these temporary files, you can
modify your script to set TMPDIR appropriately.

 
 But a heredoc is just reading from an in-memory buffer.  That bash is
 going the inefficient route and putting the buffer in tmp, first, is
 the 2nd problem you are pointing out -- why do people who want a heredoc
 need a fork OR a tmpfile?   The output I generated is < 512 bytes.  Would it
 be too silly to simply put any heredoc output < X[KMG] in memory and not
 write it to any file?  shopt heredocsize=XXX[KMG]... and overflow to tmp
 if needed.

Maybe.  At some point, however, you have to turn it into a file descriptor,
because that is what the redirections produce.  That constrains your set of
choices.  You could invent new syntax, but it would not be a here-document
or here-string any more.

 Having several that would work would be nice.  Um... having 1 that would
 work, even... how do you turn around parent/child I/O and get the vars on the
 other side without a heredoc or procsub?  Why no pipes?

There are several mechanisms.  You simply have to use them correctly.
Since you're running a non-interactive shell without job control enabled,
you could even use pipes and the `lastpipe' shell option, but that only
first appeared in bash-4.2.
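A sketch of the lastpipe route (bash 4.2+; job control must be off, as it is in a non-interactive boot script -- the producer here is a stand-in printf):

```shell
# With lastpipe set, the final element of a pipeline runs in the current
# shell, so a plain producer | while-read loop can populate arrays that
# persist after the loop.
shopt -s lastpipe
set +m                              # lastpipe is ignored under job control

declare -A act_hw2if
printf '%s %s\n' eth5 a0:36:9f:15:c9:c2 |
while read -r ifname hwaddr; do
    act_hw2if[$hwaddr]=$ifname
done

echo "${act_hw2if[a0:36:9f:15:c9:c2]}"   # assigned in the current shell
```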

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Chet Ramey
On 10/7/14, 8:55 AM, Dan Douglas wrote:

 Um... it used a socket.. to transfer it, then it uses a tmp file on top
 of that?!  :
 
 That's the wrong process. AF_NETLINK is the interface used by iproute2 for 
 IPC 
 with the kernel. Bash doesn't use netlink sockets.

It's syslog.  Some vendors integrated patches that log all shell commands
to syslog.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Linda Walsh



Chet Ramey wrote:

On 10/7/14, 8:55 AM, Dan Douglas wrote:


Um... it used a socket.. to transfer it, then it uses a tmp file on top
of that?!  :
That's the wrong process. AF_NETLINK is the interface used by iproute2 for IPC 
with the kernel. Bash doesn't use netlink sockets.


It's syslog.  Some vendors integrated patches that log all shell commands
to syslog.

Chet


No... that wasn't syslog, it was strace in another terminal where I attached
the bash that was doing the various types of data transfer (pipes, process
substitution, assignment... etc.)


strace -p 48785 -ff

Process 48785 attached
read(0, "\r", 1)= 1
write(2, "\n", 1)   = 1
socket(PF_NETLINK, SOCK_RAW, 9) = 3
sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, 
msg_iov(2)=[{"*\0\0\0d\4\1\0\0\0\0\0\0\0\0\0", 16}, {"read a b c d 
<<<${arr[@]}\0", 26}], msg_controllen=0, msg_flags=0}, 0) = 42

close(3)= 0
-
Um... it used a socket.. to transfer *i^Ht^H*SOMETHING,
then it uses a tmp file on top of that:

rt_sigaction(SIGINT, {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, 
{0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, 8) = 0

open("/tmp/sh-thd-110678907923", O_WRONLY|O_CREAT|O_EXCL|O_TRUNC, 0600) = 3
write(3, "one two three four", 18)  = 18
write(3, "\n", 1)   = 1
open("/tmp/sh-thd-110678907923", O_RDONLY) = 4
close(3)= 0
unlink("/tmp/sh-thd-110678907923")  = 0
fcntl(0, F_GETFD)   = 0
fcntl(0, F_DUPFD, 10)   = 10
fcntl(0, F_GETFD)   = 0
fcntl(10, F_SETFD, FD_CLOEXEC)  = 0
dup2(4, 0)  = 0
close(4)= 0



Bash may not use netlink sockets directly, but perhaps that was a weird-looking
call to 'nss' to do some name or group lookup... but the presence of the read
text as part of one of the messages seems a bit peculiar.


Of NOTE:
the straces I ran were with the system up in normal RL3 -- that is where I see
it writing to tmp.  I have neither TMP nor TMPDIR set at this point.  I don't
think TMP or TMPDIR is set in the runscript at boot, but the script could be
inheriting a value from the bootup-infrastructure environment.  I would have
expected them to write into /tmp -- which was why, in one of the listings, I
did the du on both /tmp and /var/tmp to show they were mounted, and did a test
write (touch) on a file in /tmp and verified I was able to do so and see it
with ls.

If TMP/TMPDIR is set to a value, it isn't /tmp or /var/tmp, since both are up
and available at that point.

The problem is that if I manually break into the boot process (this system goes
through a 'B' state to do these things):

/etc/rc.d/boot.d/S01boot.sysctl@
/etc/rc.d/boot.d/S01boot.usr-mount@
/etc/rc.d/boot.d/S02boot.udev@
/etc/rc.d/boot.d/S04boot.lvm@
/etc/rc.d/boot.d/S05boot.clock@
/etc/rc.d/boot.d/S06boot.localnet@
/etc/rc.d/boot.d/S07boot.cgroup@
/etc/rc.d/boot.d/S11boot.localfs@
/etc/rc.d/boot.d/S13boot.klog@
/etc/rc.d/boot.d/S13boot.lvm_monitor@
/etc/rc.d/boot.d/S13boot.swap@
/etc/rc.d/boot.d/S13setserial@
/etc/rc.d/boot.d/S14boot.isapnp@
/etc/rc.d/boot.d/S14boot.ldconfig@
/etc/rc.d/boot.d/S50boot.assign_netif_names@
/etc/rc.d/boot.d/S50boot.sys_settings@
/etc/rc.d/boot.d/S50boot.sysstat@

the script in question is assign_netif_names, run fairly late.
But if I break in on 'B', I get control before S01boot.  If I run the command
manually at that point, **it works**.   Only when I'm letting it boot normally
does it hit this problem.

During a normal boot some of these are happening in parallel; however, that was
why I tested the writability of /tmp (verified it to be so).  I'll print out
TMP and TMPDIR -- and see if they contain unexpected values, but my assumption
has been they are empty then, as they are after boot, with /tmp taking the tmp
file (as was the case in the strace I ran w/system up).

The messages I see, record, and transmit are the boot logs -- not trace logs;
they show the kernel boot followed by a switchover (when klog is called) from
the kernel format to a more user-ish format, i.e.:
[4.058222] rtc_cmos 00:01: RTC can wake from S4
[4.058567] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
[4.058607] rtc_cmos 00:01: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[4.058772] device-mapper: uevent: version 1.0.3
[4.059034] device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised...

.root fs is next:
( mount |grep sdc1
/dev/sdc1 on / type xfs (rw,nodiratime,relatime,attr2,inode64,noquota))

[4.356688] XFS (sdc1): Mounting V4 Filesystem
[4.423530] usb 1-3.1: new low-speed USB device number 3 using ehci-pci
[4.447749] XFS (sdc1): Ending clean mount
[4.451993] VFS: Mounted root (xfs filesystem) on device 8:33.
[4.461451] devtmpfs: mounted

... last kernel USB probe.. then a 15-second gap while MOST of the disk

Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Greg Wooledge
On Tue, Oct 07, 2014 at 09:54:21AM -0700, Linda Walsh wrote:
 Chet Ramey wrote:
 It's syslog.  Some vendors integrated patches that log all shell commands
 to syslog.
 
 No... that wasn't syslog, it was strace in another terminal where I attached
 the bash that was doing the various types of data transfer (pipes, process
 substitution, assignment... etc.)

Chet is saying that your vendor's version of Bash may have been patched
to make client-side calls to syslog(2), and this might be what you are
seeing in your strace.

If it turns out this is what's causing your problem, then you'll have
some tough choices to make.

 Um... it used a socket.. to transfer *i^Ht^H*SOMETHING,
 then it uses a tmp file on top of that:
 
 rt_sigaction(SIGINT, {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, 
 {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, 8) = 0
 open("/tmp/sh-thd-110678907923", O_WRONLY|O_CREAT|O_EXCL|O_TRUNC, 0600) = 3
 write(3, "one two three four", 18)  = 18
 write(3, "\n", 1)   = 1
 open("/tmp/sh-thd-110678907923", O_RDONLY) = 4
 close(3)= 0
 unlink("/tmp/sh-thd-110678907923")  = 0

The opens, writes, close, and unlink are straight out of the redir.c function
here_document_to_fd.  So that's bash opening the temp file for the
here document.  Any socket stuff before that could be from a vendor
patch, or a child process, or something else other than vanilla bash.

 I have neither TMP nor TMPDIR set at this point.  I don't
 think TMP or TMPDIR is set in the runscript at boot, but the script could 
 be inheriting
 a value from the bootup-infrastructure environment.

You could always add some debugging lines to the script to find out.

 the script in question is assign_netif_names run fairly late.
 But if I Break in on 'B', I get control before S01boot.  If I run the 
 command
 manually at that point, **it works**.   Only when I'm letting it boot 
 normally, does
 it hit this problem.

Ouch.

 /etc/init.d/boot.assign_netif_names#193(get_net_IFnames_hwaddrs) echo 
 eth5 a0:36:9f:15:c9:c2
 /etc/init.d/boot.assign_netif_names: line 204: cannot create temp file for 
 here-document: No such file or directory

This error message definitely indicates the failure of bash internal
function here_document_to_fd (in redir.c).  The question is exactly
what failed, and why.

I've given you a workaround for the immediate problem you're seeing,
so you could use that until you are able to diagnose the underlying
problem and find out why this particular feature is failing.

To reiterate (and expanding a bit, because I omitted a few details
in the previous message):

declare -A map1 map2  # etc.
here=$PWD
create_maps() {
    cd /some/place || return
    for ifname in ...; do
        hwaddr=$(thing)
        map1[$ifname]=$hwaddr
        map2[$hwaddr]=$ifname
    done
}
create_maps
cd $here || exit

Honestly, I don't understand why you are changing directory at all.
If it's just for the convenience of shortening the hwaddr=... command,
personally I'd drop the cd, stick the full path into the hwaddr=... line,
and thereby reduce the complexity another notch.  But if you think the
cd is needed, then drop the subshell, and add the additional checking
and final cd back to where I started to compensate (as I've shown).

The subshell is what's causing all of your grief (the serialization
and deserialization), so definitely drop that.



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Linda Walsh



Greg Wooledge wrote:

OK, then use a function to give you an escapable block:

declare -A ...
create_maps() {
    cd $sysnet || return
    for ifname in ...; do
        hwaddr=$(< $ifname/address)
        act_hw2if[$hwaddr]=$ifname
        act_if2hw[$ifname]=$hwaddr
    done
}
create_maps


Either way, the code as you have suggested won't work
without overcoming another set of side effects.


There are ways to NOT use subshells.  I have given you two of those
ways now.



I appreciate the options, but the option I want is the parent staying
put, and only sending children out to change dir's.  Say some new
version of init wants to isolate init processes by putting them in their
own dir and then deleting the dir.  As long as they don't cd out of the
dir, they are fine, but if they do, they can't get back to it.

Why can't I spawn a child that cd's into a dir and reports back what
it finds in the dir?  I do this in perl or C w/no problem.

I have an iomonitor program, that monitors multiple data inputs.  For
each, I spawn a child that sits in that directory, holding the FD open
and rewinding it for updated status.  There's no constant
opening/reopening of files -- no walking paths.  Each time you open
a path, the kernel has to walk the path -- if cached, happens very
quickly -- but still work as it also has to check access on each
traversal (it can change).  If your data-gathering routines sit where
the data is and don't move, there is no waisted I/O or CPU accessing the
data and the parent gets updates via pipes.

There is no fundamental reason why, say, process substitution needs to
use /dev/fd or /proc/anything -- and couldn't operate exactly like piped
processes do now.  On my first implementation of multiple IPC programs,
I've used semaphores, message queues, named pipes, and shared memory.
With the exception of using shared memory for multiple megabytes of
memory for each of several children, pipes provided equal or better
performance.

It was certainly fun to try the various methods and compare them, but
having an implementation that only depends on the LCD (pipes in this
case) would make the code more portable and faster.  For HEREDOCs
there is no reason why Chet couldn't use his own STDIO lib for the needs
of bash that keeps everything in memory or spills on extra-large sizes.

Design choice doesn't have to become feature constraint -- or if it
does, then it's time to think about a different design choice.  Then all
of those features that currently use linux-only or linux-like extensions
like proc and /dev/fd can be ported to platforms that don't support such
a wide feature set.






Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Pierre Gaston
On Tue, Oct 7, 2014 at 8:45 PM, Linda Walsh b...@tlinx.org wrote:



 Greg Wooledge wrote:

 OK, then use a function to give you an escapable block:

 declare -A ...
 create_maps() {
     cd $sysnet || return
     for ifname in ...; do
         hwaddr=$(< $ifname/address)
         act_hw2if[$hwaddr]=$ifname
         act_if2hw[$ifname]=$hwaddr
     done
 }
 create_maps

  Either way, the code as you have suggested won't work
 without overcoming another set of side effects.


 There are ways to NOT use subshells.  I have given you two of those
 ways now.

 

 I appreciate the options, but the option I want is the parent
 staying
 put, and only sending children out to change dir's.  Say some new
 version of init wants to isolate init processes by putting them in their
 own dir and then deleting the dir.  As long as they don't cd out of the
 dir, they are fine, but if they do, they can't get back to it.

 Why can't I spawn a child that cd's into a dir and reports back
 what
 it finds in the dir?  I do this in perl or C w/no problem.

 I have an iomonitor program, that monitors multiple data inputs.  For
 each, I spawn a child that sits in that directory, holding the FD open
 and rewinding it for updated status.  There's no constant
  opening/reopening of files -- no walking paths.  Each time you open
  a path, the kernel has to walk the path -- if cached, that happens very
  quickly -- but it's still work, as it also has to check access on each
  traversal (it can change).  If your data-gathering routines sit where
  the data is and don't move, there is no wasted I/O or CPU accessing the
  data, and the parent gets updates via pipes.

 There is no fundamental reason why, say, process substitution needs to
 use /dev/fd or /proc/anything -- and couldn't operate exactly like piped
 processes do now.  On my first implementation of multiple IPC programs,
 I've used semaphores, message queues, named pipes, and shared memory.


That's where you are wrong: there is no reason for *your* use case, but the
basic idea behind process substitution is to be able to use a pipe in a
place where you normally need a file name.


Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Chet Ramey
On 10/7/14, 1:45 PM, Linda Walsh wrote:

 There is no fundamental reason why, say, process substitution needs to
 use /dev/fd or /proc/anything -- and couldn't operate exactly like piped
 processes do now.  On my first implementation of multiple IPC programs,
 I've used semaphores, message queues, named pipes, and shared memory.
 With the exception of using shared memory for multiple megabytes of
 memory for each of several children, pipes provided equal or better
 performance.

I think you have a fundamental misunderstanding of the purpose of
process substitution.  It is a way to turn a pipe to another process
into a file name, so that you can open what looks like a file, get a
file descriptor, use the normal set of read and write calls on it, and
all the while talk to another process.  There are only a few ways to
do that, and /dev/fd and named pipes are really the only viable ones.

 It was certainly fun to try the various methods and compare them, but
 having an implementation that only depends on the LCD (pipes in this
 case), would make the code more portable and faster.  For HEREDOC's
 there is no reason why chet couldn't use his own STDIO lib for the needs
 of bash that keep everything in memory or spill on extra large sizes.

Gosh, you mean I could write a stdio library that would have to be portable
over the whole set of systems on which bash currently runs and support it
forever, abandoning all the work vendors have done?  It sounds too good to
be true!

 Design choice doesn't have to become feature constraint -- or if it
 does, then it's time to think about a different design choice.  Then all
 of those features that currently use linux-only or linux-like extensions
 like proc and /dev/fd can be ported to platforms that don't support such
 a wide feature set.

/dev/fd (which I prefer to /proc) is not Linux specific.  Named pipes are
standard.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Linda Walsh

Pierre Gaston wrote:

That's where you are wrong: there is no reason for *your* use case, but the
basic idea behind process substitution is to be able to use a pipe in a
place where you normally need a file name.
  
Well, that's not what I needed it for.  I needed to read from a child
process that changed directories.


How can you say there is no reason for my use case...

It's not thoroughly polished yet, but it does work.  It can rename and
reorder your network interfaces on boot so you can keep them with
consistent names and
ordering -- something systemd doesn't seem to support.  Furthermore, it
doesn't depend on any suse library code -- which they've been putting in
hooks and traps for those not using systemd.  Supposedly people not using
systemd won't be able to boot their systems. 

This is one of my side projects to keep my system booting and to make
available to anyone who might want to use it.  But you call that no reason?

Sigh.

If you want to see what I was doing in context, it's attached.
It's a stand-alone script to tell you if your interfaces are in your
desired order, reorder them if you wish, and act as a startup/shutdown
script to fix things each boot.  I haven't spent any time in cleanup, but
it's still not bad code.  See if the code is not readable -- even for shell!




#!/bin/bash 
#boot.assign_network_names
### BEGIN INIT INFO
# Provides:   net-devices
# Required-Start: boot.udev boot.device-mapper boot.localfs
# Required-Stop:  $null   
# Default-Start:   B
# Default-Stop:
# Short-Description:   reorder and rename net devices
# Description: reorder and rename net devs if needed
### END INIT INFO
#
# assign network names as rc-script
#   L A Walsh, (free to use/modify/distribute to nice people) (c) 2013-2014
#  
#include standard template:
_prgpth=${0:?}; _prgpth=${_prgpth#-}; _prg=${_prgpth##*/}
_prgdr=${_prgpth%/$_prg}
[[ -z $_prgdr || $_prg == $_prgdr ]] && _prgdr=$PWD
#if ! typeset -f include >/dev/null; then
#   source ${_LOCAL_DIR:=/etc/local}/bash_env.sh
#fi
export PATH=/etc/local/bin:/etc/local/lib:$PATH
export PS4='${BASH_SOURCE:+${BASH_SOURCE[0]}}#${LINENO}${FUNCNAME:+(${FUNCNAME[0]})}> '


#include stdalias (needed entries included inline, below)
shopt -s extglob expand_aliases

alias dcl=declare       sub=function
alias int=dcl\ -i       map=dcl\ -A     hash=dcl\ -A    array=dcl\ -a
alias lower=dcl\ -l     upper=dcl\ -u   string=dcl      my=dcl
alias map2int=dcl\ -Ai  intArray=dcl\ -ia

cfg_fn=/etc/sysconfig/assign_netif_names

[[ -f $cfg_fn ]] || {
    echo "Cannot find required config file: $cfg_fn"; exit 1; }

# expected variables to be read in
array needed_drivers
array Linknames
array Default
map if2hw

. $cfg_fn || { echo "Error in reading config file $cfg_fn"; exit 2; }

DfltRE="^+(${Default_good[@]})$"
DfltRE=${DfltRE// /|}

declare -a down_ln=()
for ln in "${Linknames[@]}"; do
    [[ $ln =~ $DfltRE ]] && continue
    down_ln+=($ln)
done

map hw2if

for intf in "${!if2hw[@]}"; do
    addr=${if2hw[$intf]}
    hw2if[$addr]=$intf
done


# end read config ##

#include rc.status  -- essential funcs included below:
int rc_status=0
sub rc_reset { rc_status=0; }

sub rc_status {
    rc_status=$?
    if ((rc_status)) && { (($#)) && [[ $1 = -v ]]; }; then
        echo "Abnormal rc_status was $rc_status."
    elif (($#)) && [[ $1 = -v ]]; then
        echo "rc_status: ok"
    fi
}

sub rc_exit {
rc_status=$?;
rc_status
exit $rc_status
}

sub warn () { local msg="Warning: ${1:-general}"
    printf "%s\n" "$msg" >&2
}

sub die () { int stat=$?; local msg="Error. ${1:-unknown}"
    echo "$msg (errno=$stat)" >&2
    (exit $stat)        # sets $? (doesn't exit inside ())
    rc_status -v
    rc_exit
    exit $stat          # exit unless die called in subshell
}

sysfs=/sys
sysnet=$sysfs/class/net

[[ -d $sysfs && -d $sysnet ]] ||
    sysnet=$sysfs/class/net
sys_modules=$sysfs/module

if [[ -z $(type -P modprobe) ]]; then   # verify find of modprobe
    export PATH=/bin:/sbin:/usr/bin:/usr/sbin:$PATH
fi

if [[ -n $(type -P modprobe) ]]; then   # if found, alias it
    alias modprobe=$(type -P modprobe)
else
    alias modprobe="die 'cannot load required modules'"   # only throw error if needed
fi

if [[ -z $(type -P ip) ]]; then         # ip is needed, so no delayed error
    die "Cannot find 'ip' util"

Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-07 Thread Chet Ramey
On 10/7/14, 5:35 PM, Linda Walsh wrote:
 
 
 Chet Ramey wrote:
 On 10/7/14, 1:45 PM, Linda Walsh wrote:

 There is no fundamental reason why, say, process substitution needs to
 use /dev/fd or /proc/anything -- and couldn't operate exactly like piped
 processes do now.  On my first implementation of multiple IPC programs,
 I've used semaphores, message queues, named pipes, and shared memory.
 With the exception of using shared memory for mutiple megabytes of
 memory for each of several children, pipes provided equal or better
 performance.

 I think you have a fundamental misunderstanding of the purpose of
 process substitution.  It is a way to turn a pipe to another process
 into a file name, so that you can open what looks like a file,
 
 You mean <(process) is the part you are saying looks like a file?
 
 If it wasn't for the shell mangling eoln's why wouldn't $(process) be the
 same?

OK.  Let's take a step back.  <(process) expands to a file name like
/dev/fd/26.  When you open that file for reading, you can read the output
of the process.  If you open it for writing, using the >(process) syntax,
you can write to the process's stdin.  This can be asynchronous, not
synchronous like pipes, and, since it can be combined with `exec' and a
redirection, you can save the file descriptor for later use.  In other
words, you can treat dynamically-generated data like a file.  That
filename isn't private to the shell; it can be used as an argument to
another program.

$(process) expands directly to the output of `process'.  Instead of
being able to read and process that data, it appears on the command line
as an expanded word.  You could make `process' output a filename that
the redirection opens, but that's not the same thing as dynamically
generating the data.
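A couple of stock illustrations of the distinction Chet draws (the file names under /tmp are made up for the example):

```shell
# Throwaway input files so the example actually runs:
printf 'b\na\n' > /tmp/list1.txt
printf 'a\nb\n' > /tmp/list2.txt

# <(cmd) expands to a pathname (e.g. /dev/fd/63); diff insists on *file*
# arguments, yet here it consumes dynamically generated (sorted) data:
diff <(sort /tmp/list1.txt) <(sort /tmp/list2.txt) && echo "same contents"

# >(cmd) names a "file" whose writes become cmd's stdin:
echo "hello" | tee >(gzip -c > /tmp/copy.gz) > /dev/null

# $(cmd), by contrast, pastes cmd's *output* onto the command line:
echo "today is $(date)"
```

The first two commands would be impossible with plain `$(...)`, because diff and tee each need a name they can open, not expanded text.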

 Or are you saying that the shell not mangling eoln's is a way of making it
 look like a filei that you can use the normal set of !!!WRITE!!!...
 calls I can write to such a process?

This doesn't make any sense.

 
 But I never got that <(process) was anything more than a one-way read
 from the right-hand process into the left -- and was not any different
 than (process)| where the left hand process feeds into the right hand
 process.
 
 I.e. both looked like unidirectional communication over a pipe.

They are superficially similar.  The < redirection is what makes it a
`one-way read'.  That's not the only thing you can do with a process
substitution.

 Both being cases where you can write to a process through a pipe.  I've
 used FIFO's but other than making it a bit easier to parse lines, they
 weren't much different than pipes -- which get parsed into lines all the time
 over the net et al.

Pipes are synchronous and anonymous.  Process substitution creates a
usable filename that you can use to communicate asynchronously with a
process.  They can be made to do the same thing, but are different.
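The "combined with `exec`" usage Chet mentions can be sketched like so (the `{logfd}` fd-variable form needs bash 4.1+; the logger loop and log path are illustrative):

```shell
# Bind a process substitution to a named fd with exec, then keep writing
# to it across many commands; writes return immediately while the reader
# drains at its own pace.
exec {logfd}> >(while IFS= read -r line; do
                    printf '[log] %s\n' "$line"
                done > /tmp/demo.log)

echo "step one complete" >&"$logfd"
echo "step two complete" >&"$logfd"

exec {logfd}>&-     # close the fd; the reader sees EOF and exits
```

A plain `cmd | logger` pipeline couldn't be held open and reused this way across unrelated commands.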

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Linda Walsh

In running a startup script, I am endeavoring not to use tmp files
where possible.  As part of this, I sent the output of a command
to stdout where I read it using the variable read syntax:

   while read ifname hwaddr; do
       printf "ifname=%s, hwaddr=%s\n" "$ifname" "$hwaddr"
       act_hw2if[$hwaddr]=$ifname
       act_if2hw[$ifname]=$hwaddr
       printf "act_hw2if[%s]=%s, act_if2hw[%s]=%s\n" \
           "${act_hw2if[$hwaddr]}" "${act_if2hw[$ifname]}"
   done <<< $(get_net_IFnames_hwaddrs)

Note, I used the $() form to avoid process substitution -- as I was told
on this list that the <<< form didn't use process substitution.

So I now get:
/etc/init.d/boot.assign_netif_names#192(get_net_IFnames_hwaddrs)>
echo eth5 a0:36:9f:15:c9:c2
/etc/init.d/boot.assign_netif_names: line 203: cannot create temp file 
for here-document: No such file or directory


Where am I using a HERE doc?

More importantly, at this point, where is it trying to write (or read)?

Simple Q.  Why isn't it using some small fraction of the 90+G of memory
that is free for use at this point?

This really feels like Whack-a-mole.





Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Greg Wooledge
On Mon, Oct 06, 2014 at 12:14:57PM -0700, Linda Walsh wrote:
done $(get_net_IFnames_hwaddrs)

 Where am I using a HERE doc?

 << and <<< both create temporary files.



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Linda Walsh

Greg Wooledge wrote:

On Mon, Oct 06, 2014 at 12:14:57PM -0700, Linda Walsh wrote:

   done <<< $(get_net_IFnames_hwaddrs)



Where am I using a HERE doc?


 << and <<< both create temporary files.



According to Chet, the only way to do a multi-var assignment in bash is

read a b c d <<< $(echo ${array[@]})

Forcing a simple assignment into using a tmp file seems Machiavellian --
as it does exactly the thing the user is trying to avoid through
unexpected means.

The point of grouping assignments is to save space (in the code) and have
the group initialized at the same time -- and more quickly than using
separate assignments.

So why would someone use a tmp file to do an assignment.

Even the gcc chain is able to use pipe to send the results of one stage
of the compiler to the next without using a tmp.

That's been around for at least 10 years.

So why would a temp file be used?


Creating a tmp file to do an assignment, I assert is a bug.

It is entirely counter-intuitive that such wouldn't use the same mechanism
as LtR ordered pipes.

I.e.

cmd1 | cmd2 -- that hasn't used tmp files on modern *nix systems for
probably 20 years or more (I think DOS was the last shell I knew that used
tmp files...)

so why would cmd2 <(cmd1 [|]) not use the same paradigm -- worse, is

cmd1 > MEMVAR   -- output is already in memory...

so why would read a b c <<< ${MEMVAR} need a tmp file if the text to be
read is already in memory?




Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Pierre Gaston
On Mon, Oct 6, 2014 at 10:38 PM, Linda Walsh b...@tlinx.org wrote:

 Greg Wooledge wrote:

 On Mon, Oct 06, 2014 at 12:14:57PM -0700, Linda Walsh wrote:

   done <<< $(get_net_IFnames_hwaddrs)


  Where am I using a HERE doc?


  << and <<< both create temporary files.



 According to Chet, the only way to do a multi-var assignment in bash is

 read a b c d <<< $(echo ${array[@]})

 Forcing a simple assignment into using a tmp file seems Machiavellian --
 as it does exactly the thing the user is trying to avoid through
 unexpected means.

 The point of grouping assignments is to save space (in the code) and have
 the group initialized at the same time -- and more quickly than using
 separate assignments.

 So why would someone use a tmp file to do an assignment.

 Even the gcc chain is able to use pipe to send the results of one stage
 of the compiler to the next without using a tmp.

 That's been around for at least 10 years.

 So why would a temp file be used?


 Creating a tmp file to do an assignment, I assert is a bug.

 It is entirely counter-intuitive that such wouldn't use the same mechanism
 as LtR ordered pipes.

 I.e.

 cmd1 | cmd2 -- that hasn't used tmp files on modern *nix systems for
 probably 20 years or more (I think DOS was the last shell I knew that used
 tmp files...)

 so why would cmd2 <(cmd1 [|]) not use the same paradigm -- worse, is

 cmd1 > MEMVAR   -- output is already in memory...

 so why would read a b c <<< ${MEMVAR} need a tmp file if the text to be
 read is already in memory?


 Because it's not a simple assignment, it's using a mechanism to send data
to an external program and another one to read from a stream of data.

Some shells use the pipe buffer as an optimization when the amount of
data is small (which is probably the case for most heredocs/herestrings).


Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Greg Wooledge
On Mon, Oct 06, 2014 at 12:38:21PM -0700, Linda Walsh wrote:
 According to Chet, the only way to do a multi-var assignment in bash is

 read a b c d <<< $(echo ${array[@]})

The redundant $(echo...) there is pretty bizarre.  Then again, that
whole command is strange.  You have a nice friendly array and you are
assigning the first 3 elements to 3 different scalar variables.  Why?
(The fourth scalar is picking up the remainder, so I assume it's the
rubbish bin.)

Why not simply use the array elements?

 Forcing a simple assignment into using a tmp file seems Machiavellian --
 as it does exactly the thing the user is trying to avoid through
 unexpected means.

You are working in a severely constrained environment.  Thus, you need
to adapt your code to that environment.  This may (often will) mean
you must forsake your usual practices, and fall back to simpler
techniques.

I don't know precisely what your boot script is attempting to do, or
precisely when (during the boot sequence) it runs.  Nor on what operating
system you're doing it (though I'll go out on a limb and guess some
flavor of Linux).

Maybe the best solution here is to move your script to a different part
of the boot sequence.  If you run it after all the local file systems
have been mounted, then you should be able to create temporary files,
which in turn means << and <<< become available, should you need them.

 The point of grouping assignments is to save space (in the code) and have
 the group initialized at the same time -- and more quickly than using
 separate assignments.

Elegance must be the first sacrifice upon the altar, here.

 So why would someone use a tmp file to do an assignment.

It has nothing to do with assignments.  Temp files are how here documents
(and their cousin here strings) are implemented.  A here document is a
redirection of standard input.  A redirection *from a file*, in fact,
albeit a file that is created on the fly for you by the shell.

 So why would a temp file be used?

Historical reasons, and because many processes will expect standard input
to have random access capabilities (at least the ability to rewind via
lseek).  Since bash has no idea what program is going to be reading the
here document, why take chances?
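One way to see this from the shell, on Linux (a sketch assuming /proc and GNU stat; what you see varies by bash version -- newer bashes may back small documents with a pipe instead of a temp file):

```shell
# Ask the kernel what fd 0 actually is inside a here-string redirection.
what_is_stdin() {
    stat -L -c %F /dev/stdin    # "regular file" (seekable) or "fifo"
    readlink /proc/self/fd/0    # e.g. a deleted file under $TMPDIR, or a pipe
}
what_is_stdin <<< "some here-string data"
```

A "regular file" answer is exactly the lseek-friendly behavior described above; a pipe would refuse the seek.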

 Creating a tmp file to do an assignment, I assert is a bug.

You are not *just* doing an assignment.  You are doing a redirection.
Furthermore, you have not even demonstrated the actual reason for the
here document yet.

 cmd1 | cmd2 -- that hasn't used tmp files on modern *nix systems for
 probably 20 years or more (I think DOS was the last shell I knew that used
 tmp files...)

Correct.  Pipelines do not use temp files.

 so why would cmd2  (cmd1 [|]) not use the same paradigm -- worse, is

Process substitution uses either a named pipe, or a /dev/fd/* file system
entry, or a temp file, depending on the platform.

 cmd1 > MEMVAR   -- output is already in memory...

Now you're just making things up.  That isn't bash syntax.

 so why would read a b c <<< ${MEMVAR} need a tmp file if the text to be
 read is already in memory?

Because you are using generalized redirection syntax that is intended
for use with *any* command, not just shell builtins.

Your example has now changed, from   read a b c d <<< ${array[*]}
to   read a b c <<< $scalar

I don't know which one your *real* script is using, so it's hard to
advise you.

If you want to replace the former, I would suggest this:

a=${array[0]}
b=${array[1]}
c=${array[2]}
d=${array[*]:3}

If you want to replace the latter, I would suggest something like this:

shopt -s extglob
temp=${scalar//+([[:space:]])/ }
a=${temp%% *}
temp=${temp#* }
b=${temp%% *}
c=${temp#* }

This may be overkill, since the consolidation of whitespace might not
be necessary.  It's impossible to tell since you have not shown the
actual input, or the actual desired output, or even used actual variable
names in your examples.
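For instance, feeding the second replacement some made-up interface data (the field values are purely illustrative):

```shell
shopt -s extglob
scalar='eth0   a0:36:9f:15:c9:c2   1500'   # hypothetical input line

temp=${scalar//+([[:space:]])/ }   # squeeze whitespace runs to one space
a=${temp%% *}                      # first field
temp=${temp#* }
b=${temp%% *}                      # second field
c=${temp#* }                       # remainder

printf '%s|%s|%s\n' "$a" "$b" "$c"   # → eth0|a0:36:9f:15:c9:c2|1500
```

No subshell, no redirection, no temp file -- just parameter expansion.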



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Linda Walsh



Greg Wooledge wrote:

On Mon, Oct 06, 2014 at 12:38:21PM -0700, Linda Walsh wrote:

According to Chet, the only way to do a multi-var assignment in bash is

read a b c d <<< $(echo ${array[@]})


The redundant $(echo...) there is pretty bizarre.  Then again, that
whole command is strange.  You have a nice friendly array and you are
assigning the first 3 elements to 3 different scalar variables.  Why?
(The fourth scalar is picking up the remainder, so I assume it's the
rubbish bin.)

Why not simply use the array elements?


Forcing a simple assignment into using a tmp file seems Machiavellian --
as it does exactly the thing the user is trying to avoid through
unexpected means.


You are working in a severely constrained environment. 

That isn't the problem:  the assignment using a tmp file is:

strace -p 48785 -ff

Process 48785 attached
read(0, "\r", 1) = 1
write(2, "\n", 1) = 1
socket(PF_NETLINK, SOCK_RAW, 9) = 3
sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=}, 
msg_iov(2)=[{"*\0\0\0d\4\1\0\0\0\0\0\0\0\0\0", 16}, {"read a b c d 
<<<${arr[@]}\0", 26}], msg_controllen=0, msg_flags=0}, 0) = 42

close(3)= 0
-
Um... it used a socket... to transfer it, then it uses a tmp file on top
of that?!

rt_sigaction(SIGINT, {0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, 
{0x4320b1, [], SA_RESTORER|SA_RESTART, 0x30020358d0}, 8) = 0

open("/tmp/sh-thd-110678907923", O_WRONLY|O_CREAT|O_EXCL|O_TRUNC, 0600) = 3
write(3, "one two three four", 18)  = 18
write(3, "\n", 1)   = 1
open("/tmp/sh-thd-110678907923", O_RDONLY) = 4
close(3) = 0
unlink("/tmp/sh-thd-110678907923")  = 0
fcntl(0, F_GETFD)   = 0
fcntl(0, F_DUPFD, 10)   = 10
fcntl(0, F_GETFD)   = 0
fcntl(10, F_SETFD, FD_CLOEXEC)  = 0
dup2(4, 0)  = 0
close(4)= 0
ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 
0x7fff85627820) = -1 ENOTTY (Inappropriate ioctl for device)

lseek(0, 0, SEEK_CUR)   = 0
read(0, "one two three four\n", 128) = 19
dup2(10, 0) = 0
fcntl(10, F_GETFD)  = 0x1 (flags FD_CLOEXEC)
close(10)   = 0
-

Why in god's name would it use a socket (still of arguable overhead, when
it could be done in a local lib), but THEN duplicate the I/O in a file?


Thus, you need
to adapt your code to that environment.  This may (often will) mean
you must forsake your usual practices, and fall back to simpler
techniques.


The above is under a normal environment.  It's still broken.






Maybe the best solution here is to move your script to a different part
of the boot sequence.  If you run it after all the local file systems
have been mounted, then you should be able to create temporary files,
which in turns means  and  become available, should you need them.


Theoretically, they ARE mounted.  What I think may be happening
is that $TMP is not set, so it is trying to open the tmp dir in:

//sh-thd-183928392381 -- a network address.



Elegance must be the first sacrifice upon the altar, here.

---
Correctness before elegance.  1) use memory before OS services.
2) use in-memory services before file services.  3) Don't use uninitialized
variables (TMP) -- verify that they are sane values before usage.
4) don't use the network for tmp when /tmp or /var/tmp would have worked
just fine.
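A defensive sketch of point 3) for an early-boot script (the candidate directory list is illustrative): pin TMPDIR to a known-writable directory before any here-string runs, since bash creates its here-doc temp files under $TMPDIR (default /tmp):

```shell
# Pick the first writable temp directory; fail loudly if none exists,
# rather than letting bash fall back to a bogus or unset $TMPDIR.
for d in "$TMPDIR" /tmp /var/tmp /run /dev/shm; do
    if [[ -n $d && -d $d && -w $d ]]; then
        export TMPDIR=$d
        break
    fi
done
: "${TMPDIR:?no writable temp directory found}"
```

Run early, this turns the mysterious "cannot create temp file for here-document" into an explicit, diagnosable failure.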






So why would someone use a tmp file to do an assignment.


It has nothing to do with assignments.  Temp files are how here documents
(and their cousin here strings) are implemented.  A here document is a
redirection of standard input.  A redirection *from a file*


Nope:
 This type of redirection instructs the shell to  read  input  from  the
   current source until a line containing only delimiter (with no trailing
   blanks) is seen.  All of the lines read up to that point are then  used
   as the standard input for a command.

"The current source" -- can be anything that input can be
redirected from -- including memory.



in fact,
albeit a file that is created on the fly for you by the shell.


Gratuitous expectation of slow resources...  Non-conservation
of resources, not for the user, but for itself.

Note:

a=$(<foo)
echo "$a"

one
two
three
---
no tmp files used, but it does a file read on foo.

b=$a
-- the above uses no tmp files.

b=$(echo $a)
---
THAT uses no tmp files.

But
read b <<< $a

uses a tmp file.

That's ridiculously lame.








So why would a temp file be used?


Historical reasons, and because many processes will expect standard input
to have random access capabilities (at least the ability to rewind via
lseek).  Since bash has no idea what program is going to be reading the
here document, why take chances?


How 

Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Greg Wooledge
On Mon, Oct 06, 2014 at 02:00:47PM -0700, Linda Walsh wrote:
 How much of the original do you want?... wait... um...
 But the point is it should already work... I think it is trying to read
 from the network.

In the code below, you have a function that *generates* a stream of
data out of variables that it has directly in memory ($nm) and
the contents of files that it reads from disk ($(<$nm/address)).

Then you feed this output to something else that tries to parse the
fields back *out* of the stream of data, to assign them to variables
inside a loop, so that it can operate on the separate variables.

You're just making your life harder than it has to be.  Combine these
two blobs of code together.  Iterate directly over the variables as
you get them, instead of doing this serialization/deserialization step.

netdev_pat=... # (and other variable assignments)
cd "$sysnet" &&
for ifname in ...; do
    hwaddr=$(<$ifname/address)
    act_hw2if[$hwaddr]=$ifname
    act_if2hw[$ifname]=$hwaddr
done

On a side note, you are going out of your way to make your bash code look
like perl (alias my=declare and so on).  This is a bad idea.  It means
you are putting round pegs in square holes.  It's best to treat each
language as its own language, with its own style and its own strengths
and weaknesses.  All these extra layers you are introducing just make
things worse, not better.


 shopt -s expand_aliases
 alias dcl=declare sub=function
 alias int=dcl\ -i   map=dcl\ -A   hash=dcl\ -A   array=dcl\ -a
 alias lower=dcl\ -l   upper=dcl\ -u   string=dcl   my=dcl
 sub get_net_IFnames_hwaddrs () {# get names + addrs from /sys
   vrfy_drivers
   array pseudo_devs=(br bond ifb team)
   string pseudo_RE="^+(${pseudo_devs[@]})$"
   pseudo_RE=${pseudo_RE// /|}
   string netdev_pat='+([_0-9a-z])+([0-9])'
   sysnet=/sys/class/net
   ( cd $sysnet &&
     for nm in $( printf "%s\n" $netdev_pat | grep -Pv "$pseudo_RE"); do
       echo $nm $(<$nm/address)
     done )
 }
 
 map act_hw2if   #actual values (to be read in)
 map act_if2hw
 map XIF #tmp array to hold exchanged IF's
 
 sub read_actuals () {   # parse output stream from above
   my ifname hwaddr
   while read ifname hwaddr; do
 act_hw2if[$hwaddr]=$ifname
 act_if2hw[$ifname]=$hwaddr
   done <<< $(get_net_IFnames_hwaddrs)
 }



Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Linda Walsh



Greg Wooledge wrote:

On Mon, Oct 06, 2014 at 02:00:47PM -0700, Linda Walsh wrote:

How much of the original do you want?... wait... um...
But the point is it should already work... I think it is trying to read
from the network.


In the code below, you have a function that *generates* a stream of
data out of variables that it has directly in memory ($nm) and
the contents of files that it reads from disk ($(<$nm/address)).

---
That's because the 1st function parsed the output of
a command like 'ip'... it has later been optimized to not call it
but use /sys directly... BUT, there is no guarantee that I won't
have to go back to the other format for some reason.

When I split things up it's because of NOT wanting
to take shortcuts.



On a side note, you are going out of your way to make your bash code look
like perl (alias my=declare and so on).  This is a bad idea.

---
I don't think it is perl.  I think it is something that
saves me on typing and increases clarity.

not perl:    shell equiv
array        declare -a
map          declare -A
hash         declare -A
int          declare -i
string       declare

I threw in 'my' because it is a hell of a lot
shorter than declare.



This is a bad idea.  It means you are putting round pegs in square holes.


Extra layers?  Where?
I don't throw in extra layers to make things look like perl;
I throw them in to make the scripts maintainable.  I used to code
in the same fashion as all the shell books... It was unmaintainable
and illegible.  FWIW, I tried to move perl toward more legibility
and that went over less well than an ice cube's chance in hell.

For them, being forced to upgrade to a POSIX standard would be

an improvement.   (and you know how I feel PSX is a LowestCD) -- a
barely passing grade, like a D.

While trying to do that with shell is, admittedly trying to put round pegs
in square holes -- one does the best one can do with the tools available.

Trying to focus on the whys of my algorithms and style, when
you and others have indicated no desire to see the whole context in the
past, seems a bit like telling me I'm not watching the show the right
way and that I should ignore the man over there behind the curtain.




Re: bash uses tmp files for inter-process communication instead of pipes?

2014-10-06 Thread Linda Walsh



Greg Wooledge wrote:

netdev_pat=... # (and other variable assignments)
(cd "$sysnet" &&
for ifname in ...; do
hwaddr=$(<$ifname/address)
act_hw2if[$hwaddr]=$ifname
act_if2hw[$ifname]=$hwaddr
done)


Except that either act_hw2if + pair were just assigned to
in the sub process that was meant to isolate the change
of directory from the rest of the program,
OR
we aren't in the right directory when we do the reads.

Either way, the code as you have suggested won't work
without overcoming another set of side effects.