On Oct 9, 2010, at 1:25 PM, Garrett Cooper wrote:

> On Wed, Oct 6, 2010 at 8:29 PM, Devin Teske <dte...@vicor.com> wrote:
>> 
>> On Oct 6, 2010, at 4:09 PM, Brandon Gooch wrote:
>> 
>>> On Wed, Oct 6, 2010 at 3:45 PM, Devin Teske <dte...@vicor.com> wrote:
>>>> Hello fellow freebsd-hackers,
>>>> 
>>>> Long-time hacker, first-time poster.
>>>> 
>>>> I'd like to share a shell script that I wrote for FreeBSD system
>>>> administration.
>>>> 
>>> 
>>> It seems the list ate the attachment :(
>> 
>> 
>> Here she is ^_^ Comments welcome.
> 
> Hah. More nuclear reactor than bikeshed :D!

^_^ You're about to find out more...

> 
>> #!/bin/sh
>> # -*- tab-width:  4 -*- ;; Emacs
>> # vi: set tabstop=4     :: Vi/ViM
> 
>> #
>> # Default setting whether to dump a list of internal dependencies upon exit
>> #
>> : ${SYSRC_SHOW_DEPS:=0}
>> 
>> ############################################################ GLOBALS
>> 
>> # Global exit status variables
>> : ${SUCCESS:=0}
>> : ${FAILURE:=1}
> 
> Should this really be set to something other than 0 or 1 by the
> end-user's environment? This would simplify a lot of return/exit
> calls...

A scenario that I envision that almost never arises, but...

Say someone wanted to call my script but wanted to mask it to always return 
with success (why? I dunno... it's conceivable though).

Example: (this should be considered ugly -- because it is)

FAILURE=0 && sysrc foo && reboot

Efficacy:
The `reboot' rvalue of '&&' will always execute, because FAILURE has been
masked to 0 and so sysrc can only ever exit with "success".

I don't really know why I got into the practice of writing scripts this way... 
most likely a habit that seemed like a good idea at one time but never really 
amounted to anything substantive (in fact, it should perhaps be considered 
heinous).

I agree... a productionized version in the base distribution should lack such 
oddities. The script should do:

SUCCESS=0
FAILURE=1

and be done with it.

Though, I've been sometimes known to follow the habits of C-programming and 
instead do:

EXIT_SUCCESS=0
EXIT_FAILURE=1

(real macros defined by <stdlib.h>; though in C-land EXIT_FAILURE isn't 
required to be 1, just some implementation-defined nonzero value)

I just found it redundant to say:

exit $EXIT_SUCCESS

and shorter/more-succinct to say:

exit $SUCCESS
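
For what it's worth, the `: ${VAR:=default}' idiom that makes the masking
possible can be sketched in a few lines (a minimal illustration, not part of
the original script):

```shell
#!/bin/sh
# ":" is a no-op that still expands its arguments, so each line
# below assigns the default only if the variable is unset or null;
# an environment variable of the same name pre-empts it.
: ${SUCCESS:=0}
: ${FAILURE:=1}
echo "SUCCESS=$SUCCESS FAILURE=$FAILURE"
```

Run normally it prints SUCCESS=0 FAILURE=1; run as `FAILURE=0 sh script.sh'
the mask takes effect.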


>> #
>> # Program name
>> #
>> progname="${0##*/}"
>> 
>> #
>> # Options
>> #
>> SHOW_EQUALS=
>> SHOW_NAME=1
>> 
>> # Reserved for internal use
>> _depend=
> 
> When documenting arguments passed to functions, I usually do something like:
> 
> # 1 - a var
> # 2 - another var
> #
> # ... etc
> 
> because it's easier to follow for me at least.
> 
> Various spots in the codebase have differing styles though (and it
> would be better to follow the style in /etc/rc.subr, et all for
> consistency, because this tool is a consumer of those APIs).

I borrow my argument-documentation style from 15+ years of Perl programming. I 
think it's all just personal preference. Personally, I like to jam it all on 
one line specifically so that I can do a quick mark, then "?function.*name" to 
jump up to the definition line, "yy" (yank-yank; copies the current line into a 
buffer), then jump back to my mark, "p" to paste, then replace the variables 
with what I intend to pass in for the particular call.

Using vi for years teaches interesting styles -- packing a list of keywords 
onto a single line to grab/paste elsewhere is just one of those little things 
you learn.


>> ############################################################ FUNCTION
>> 
>> # fprintf $fd $fmt [ $opts ... ]
>> #
>> # Like printf, except allows you to print to a specific file-descriptor.
>> # Useful for printing to stderr (fd=2) or some other known file-descriptor.
>> #
>> : dependency checks performed after depend-function declaration
>> : function ; fprintf ( ) # $fd $fmt [ $opts ... ]
> 
> Dumb question. Does declaring `: dependency checks performed after
> depend-function declaration' and `: function' buy you anything other
> than readability via comments with syntax checking?

The first ": dependency checks ..." is just a note to myself. I used ":" syntax 
to make it stand-out differently than the "#" syntax. Not to mention that when 
I go through my scripts (well, the ones that are intended for functioning 
within an embedded environment at least) I expect to see a call to "depend()" 
before a) each/every function and b) each/every large contiguous block of code 
(or at least the blocks that look like they are good candidates for re-use in 
other scripts).

The second usage (": function") aids in finding the function declaration among 
the usages. See, in Perl, I can simply search for "sub" preceding the function 
name. In C, I tend to split the return type from the function name and ensure 
that the function name always starts in column 1, so I can search for 
"^funcname" to find the declaration as opposed to the usages/references. In 
BASH, `function' is a valid keyword and you can say "function funcname ( ) 
BLOCK", but unfortunately in good ol' Bourne shell, "function" is not an 
understood keyword. Really liking the keyword, though, I decided to make use of 
it in Bourne shell by simply turning it into a non-executed expression 
(preceding it with ":" and terminating it with ";").
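
To make the trick concrete, here is a minimal sketch (the function name is made
up) showing that plain sh happily parses the ":"-prefixed marker while it stays
greppable:

```shell
#!/bin/sh
# ":" evaluates to true and ignores its arguments, so "function"
# never reaches the parser as a keyword -- yet a search for
# "^: function ; greet" lands on the definition, not the call sites.
: function ; greet ( ) # $who
{
	echo "Hello, $1"
}
greet world
```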



>> {
>>        local fd=$1
>>        [ $# -gt 1 ] || return ${FAILURE-1}
> 
> While working at IronPort, Doug (my tech lead) has convinced me that
> constructs like:
> 
> if [ $# -le 1 ]
> then
>    return ${FAILURE-1}
> fi

Never did understand why folks insisted on splitting the if/then syntax (or 
while/do or for/do etc.) into multiple lines. I've always found that putting 
the semi-colon in there made it easier to read.


> Are a little more consistent and easier to follow than:
> 
> [ $# -gt 1 ] || return ${FAILURE-1}
> 
> Because some folks have a tendency to chain shell expressions, i.e.

I agree with you that anything more than one is excessive.

I've often tried to emulate the C-expression "bool ? if-true : else" using:

( bool && if-true ) || else

but it's just not clean-looking.

I still like the simple elegance of "expr || if-false" and "expr && if-true" 
... but again, perhaps only because my first love is Perl (which I've 
programmed for 15+ years), and statements like that are rampant in Perl, 
perhaps because the ol' Perl cookbooks have historically advocated their usage 
in just that manner.
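
The reason the "&&"/"||" form is not a true ternary is worth making explicit:
in "bool && if-true || else", the else branch also runs whenever the if-true
branch itself fails. A quick demonstration:

```shell
#!/bin/sh
# "a && b || c" runs c when EITHER a or b fails; C's "a ? b : c"
# runs c only when a is false.
true && false || echo "else fired even though the test was true"

# The long-hand form does not leak the branch's status:
if true; then
	false	# this status does not trigger an else
else
	echo "never printed"
fi
```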


> 
> expr1 || expr2 && expr3
> 
> Instead of:
> 
> if expr1 || expr2
> then
>    expr3
> fi
> 
> or...
> 
> if ! expr1
> then
>    expr2
> fi
> if [ $? -eq 0 ]
> then
>    expr3
> fi
> 
> I've caught myself chaining 3 expressions together, and I broke that
> down into a simpler (but more longhand format), but I've caught people
> chaining 4+ expressions together, which just becomes unmanageable to
> follow (and sometimes bugs creep in because of operator ordering and
> expression evaluation and subshells, etc, but that's another topic for
> another time :)..).

Yeah, 3+ is gross in my opinion (agreed).


> 
>>        shift 1
>>        printf "$@" >&$fd
>> }
>> 
>> # eprintf $fmt [ $opts ... ]
>> #
>> # Print a message to stderr (fd=2).
>> #
>> : dependency checks performed after depend-function declaration
>> : function ; eprintf ( ) # $fmt [ $opts ... ]
>> {
>>        fprintf 2 "$@"
>> }
>> 
>> # show_deps
>> #
>> # Print the current list of dependencies.
>> #
>> : dependency checks performed after depend-function declaration
>> : function ; show_deps ( ) #
>> {
>>        if [ "$SYSRC_SHOW_DEPS" = "1" ]; then
>>                eprintf "Running internal dependency list:\n"
>> 
>>                local d
>>                for d in $_depend; do
>>                        eprintf "\t%-15s%s\n" "$d" "$( type "$d" )"
> 
> The command(1) -v builtin is more portable than the type(1) builtin
> for command existence lookups (it just doesn't tell you what the
> particular item is that you're dealing with like type(1) does).
> 

Ah, coolness. command(1) is new to me just now ^_^


> I just learned that it also handles other builtin lexicon like if,
> for, while, then, do, done, etc on FreeBSD at least; POSIX also
> declares that it needs to support that though, so I think it's a safe
> assumption to state that command -v will provide you with what you
> need.

I had originally been programming in tests for '!' and 'in', but in POSIX 
Bourne shell they aren't defined in the keyword table (though they are 
understood), so type(1) balks at them in Bourne shell while csh and bash do 
respond to '!' and 'in' queries.

Since you've pointed out command(1)... I now have a way of checking '!'. Though 
unfortunately, "command -v", like type(1), also does not like "in" (in 
bourne-shell at least).
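
A sketch of probing with "command -v" (the `have' wrapper here mirrors the one
in the script, rewritten to use command(1); per POSIX it also answers for
reserved words like "if", though as noted results for '!' and 'in' still vary
between shells):

```shell
#!/bin/sh
# POSIX "command -v" succeeds for external utilities, shell
# functions, builtins and reserved words; it prints a pathname for
# utilities and a bare name for everything else.
have ( ) { command -v "$1" > /dev/null 2>&1 ; }

have sed          && echo "sed: found"
have if           && echo "if: found"
have no_such_util || echo "no_such_util: missing"
```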


> 
>>                done
>>        fi
>> }
>> 
>> # die [ $err_msg ... ]
>> #
>> # Optionally print a message to stderr before exiting with failure status.
>> #
>> : dependency checks performed after depend-function declaration
>> : function ; die ( ) # [ $err_msg ... ]
>> {
>>        local fmt="$1"
>>        [ $# -gt 0 ] && shift 1
>>        [  "$fmt"  ] && eprintf "$fmt\n" "$@"
> 
> "x$fmt" != x ? It seems like it could be simplified to:
> 
> if [ $# -gt 0 ]
> then
>    local fmt=$1
>    shift 1
>    eprintf "$fmt\n" "$@"
> fi

I never understood why people don't trust the tools they are using...

`[' is very very similar (if not identical) to test(1)

[ "..." ] is the same thing as [ -n "..." ] or test -n "..."
[ ! "..." ] is the same things as [ -z "..." ] or test -z "..."

I'll never understand why people have to throw an extra letter in there and 
then compare it to that letter.

If the variable expands to nothing, go ahead and let it. I've traced every 
possible expansion of variables when used in the following manner:

[ "$VAR" ] ...

and it never fails. If $VAR is anything but null, the entire expression will 
evaluate to true.

Again... coming from 15+ years of perl has made my eyes read the following 
block of code:

        if [ "$the_network_is_enabled" ]; then

aloud in my head as "if the network is enabled, then ..." (not too far of a 
stretch)... which has a sort of quintessential humanized logic to it, don't you 
think?

Now, contrast that with this block:

        if [ "x$the_network_is_enabled" = x ]; then

(one might verbalize that in their head as "if x plus `the network is enabled' 
is equal to x, then" ... which is more clear?)

Then again, if I spell out the implied "-n" or "-z", is it more acceptable? For 
instance...

        if [ -n "$the_network_is_enabled" ]; then

But that would require the reader (performing intonation in their heads as they 
read the code) to innately _know_ that "-n" is "this is non-null" (where "this" 
is the rvalue to the keyword).
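
Since the thread keeps circling this, a small side-by-side may help: with the
expansion quoted, all three spellings behave identically for empty, unset, and
ordinary values (the x-padding mattered mainly for historical test(1)
implementations):

```shell
#!/bin/sh
# Three equivalent emptiness tests; with the expansion quoted,
# test(1)/[ always sees a well-formed operand list, so the
# x-padding buys nothing on a POSIX shell.
VAR=""
[ "$VAR" ]      || echo "bare:   empty"
[ -n "$VAR" ]   || echo "-n:     empty"
[ "x$VAR" = x ] && echo "x-pad:  empty"
```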


> 
>>        show_deps
>>        exit ${FAILURE-1}
>> }
>> 
>> # have $anything
>> #
>> # Used for dependency calculations. Arguments are passed to the `type' built-in
>> # to determine if a given command is available (either as a shell built-in or
>> # as an external binary). The return status is true if the given argument is
>> # for an existing built-in or executable, otherwise false.
>> #
>> # This is a convenient method for building dependency lists and it aids in the
>> # readability of a script. For example,
>> #
>> #       Example 1: have sed || die "sed is missing"
>> #       Example 2: if have awk; then
>> #                       # We have awk...
>> #                  else
>> #                       # We DON'T have awk...
>> #                  fi
>> #       Example 3: have reboot && reboot
>> #
>> : dependency checks performed after depend-function declaration
>> : function ; have ( ) # $anything
>> {
>>        type "$@" > /dev/null 2>&1
>> }
>> 
>> # depend $name [ $dependency ... ]
>> #
>> # Add a dependency. Die with error if dependency is not met.
>> #
>> : dependency checks performed after depend-function declaration
>> : function ; depend ( ) # $name [ $dependency ... ]
>> {
>>        local by="$1" arg
>>        shift 1
>> 
>>        for arg in "$@"; do
>>                local d
> 
> Wouldn't it be better to declare this outside of the loop (I'm not
> sure how optimal it is to place it inside the loop)?

I'm assuming you mean the "local d" statement. There's no restriction that says 
you have to put your variable declarations at the beginning of a block (like in 
C -- even if only within a superficial block { in the middle of nowhere } ... 
like that).

Meanwhile, in Perl, scoping a variable to the loop rather than the block makes 
a real difference. Here, it just depends on whichever _looks_ nicer to you ^_^
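
For the record, a tiny sketch of the scope difference (`local' is not strictly
POSIX, but FreeBSD sh, dash, and bash all provide it with function scope):

```shell
#!/bin/sh
# "local" scopes the variable to the whole FUNCTION, not the loop,
# so redeclaring it each iteration is redundant but harmless, and
# the value set in the loop is still visible afterwards.
demo ( ) {
	for arg in a b c; do
		local d		# same single variable every time through
		d=$arg
	done
	echo "after the loop: d=$d"
}
demo
```

which prints "after the loop: d=c" -- unlike a Perl "my" inside the loop body,
which would be fresh per iteration and gone afterwards.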



>>                for d in $_depend ""; do
>>                        [ "$d" = "$arg" ] && break
>>                done
>>                if [ ! "$d" ]; then
> 
> Could you make this ` "x$d" = x ' instead?

=(

I made the switch to using [ "..." ] (implied "-n") and [ ! "..." ] (implied 
"-z") long ago because they intonate in my head so-darned well ("!" becoming 
"NOT" of course).


>>                        have "$arg" || die \
>>                                "%s: Missing dependency '%s' required by %s" \
>>                                "${progname:-$0}" "$arg" "$by"
> 
> The $0 substitution is unnecessary based on how you set progname above:
> 
> $ foo=yadda
> $ echo ${foo##*/}
> yadda
> $ foo=yadda/badda/bing/bang
> $ echo ${foo##*/}
> bang

Ah, another oddity of my programming style.

I often experienced people ripping whole blocks or whole functions out of my 
scripts and re-using them in their own scripts...

So I adopted this coding practice where... whenever I anticipated people doing 
this (usually I only anticipate people ripping whole functions), I wanted the 
blocks of code to still be semi-functional.

So what you're seeing is that every time I rely on the global "progname" within 
a re-usable code construct (a function, for example), I use a special 
parameter-expansion syntax that falls back to a sensible default value ($0 in 
this case).

So outside of functions within the script, you'll see:

        $progname

-- the global is used explicitly without fallback (because people ripping out a 
block in the main source should be smart enough to know to check the globals 
section at the top)

meanwhile, in a function:

        ${progname:-$0}

So that if they ripped said function into their own code and neglected to 
define progname, the fallback default would be $0, which the shell always 
expands to the first word (words being separated by any character of $IFS) of 
the invocation line.
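
A minimal sketch of the fall-back (the `warn' function is hypothetical):

```shell
#!/bin/sh
# Inside re-usable functions, ${progname:-$0} degrades to $0 when
# the borrowing script never defines the global.
warn ( ) { echo "${progname:-$0}: $1" ; }

warn "before the global exists"    # prefix falls back to $0
progname="${0##*/}"
warn "after it is defined"         # prefix is now $progname
```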



> 
>>                        _depend="$_depend${_depend:+ }$arg"
>>                fi
>>        done
>> }
>> 
>> #
>> # Perform dependency calculations for above rudimentary functions.
>> # NOTE: Beyond this point, use the depend-function BEFORE dependency-use
>> #
>> depend fprintf   'local' '[' 'return' 'shift' 'printf'
>> depend eprintf   'fprintf'
>> depend show_deps 'if' '[' 'then' 'eprintf' 'local' 'for' 'do' 'done' 'fi'
>> depend die       'local' '[' 'shift' 'eprintf' 'show_deps' 'exit'
>> depend have      'local' 'type' 'return'
>> depend depend    'local' 'shift' 'for' 'do' '[' 'break' 'done' 'if' 'then' \
>>                 'have' 'die' 'fi'
> 
> I'd say that you have bigger fish to fry if your shell lacks the
> needed lexicon to parse built-ins like for, do, local, etc :)...

Too true...

I was being ULTRA pedantic in my embedded-environment testing. ^_^

Taking measures to test with different shells even... sh, bash, csh, pdksh, 
zsh, etc. etc. etc. (glad to report that the script is ultra portable)



>> # usage
>> #
>> # Prints a short syntax statement and exits.
>> #
>> depend usage 'local' 'eprintf' 'die'
>> : function ; usage ( ) #
>> {
>>        local optfmt="\t%-12s%s\n"
>>        local envfmt="\t%-22s%s\n"
>> 
>>        eprintf "Usage: %s [OPTIONS] name[=value] ...\n" "${progname:-$0}"
>> 
>>        eprintf "OPTIONS:\n"
>>        eprintf "$optfmt" "-h --help" \
>>                "Print this message to stderr and exit."
>>        eprintf "$optfmt" "-d" \
>>                "Print list of internal dependencies before exit."
>>        eprintf "$optfmt" "-e" \
>>                "Print query results as \`var=value' (useful for producing"
>>        eprintf "$optfmt" "" \
>>                "output to be fed back in). Ignored if -n is specified."
>>        eprintf "$optfmt" "-n" \
>>                "Show only variable values, not their names."
>>        eprintf "\n"
>> 
>>        eprintf "ENVIRONMENT:\n"
>>        eprintf "$envfmt" "SYSRC_SHOW_DEPS" \
>>                "Dump list of dependencies. Must be zero or one"
>>        eprintf "$envfmt" "" \
>>                "(default: \`0')"
>>        eprintf "$envfmt" "RC_DEFAULTS" \
>>                "Location of \`/etc/defaults/rc.conf' file."
>> 
>>        die
>> }
>> 
>> # sysrc $setting
>> #
>> # Get a system configuration setting from the collection of system-
>> # configuration files (in order: /etc/defaults/rc.conf /etc/rc.conf
>> # and /etc/rc.conf).
>> #
>> # Examples:
>> #
>> #       sysrc sshd_enable
>> #               returns YES or NO
>> #       sysrc defaultrouter
>> #               returns IP address of default router (if configured)
>> #       sysrc 'hostname%%.*'
>> #               returns $hostname up to (but not including) first `.'
>> #       sysrc 'network_interfaces%%[$IFS]*'
>> #               returns first word of $network_interfaces
>> #       sysrc 'ntpdate_flags##*[$IFS]'
>> #               returns last word of $ntpdate_flags (time server address)
>> #       sysrc usbd_flags-"default"
>> #               returns $usbd_flags or "default" if unset
>> #       sysrc usbd_flags:-"default"
>> #               returns $usbd_flags or "default" if unset or NULL
>> #       sysrc cloned_interfaces+"alternate"
>> #               returns "alternate" if $cloned_interfaces is set
>> #       sysrc cloned_interfaces:+"alternate"
>> #               returns "alternate" if $cloned_interfaces is set and non-NULL
>> #       sysrc '#kern_securelevel'
>> #               returns length in characters of $kern_securelevel
>> #       sysrc 'hostname?'
>> #               returns NULL and error status 2 if $hostname is unset (or if
>> #               set, returns the value of $hostname with no error status)
>> #       sysrc 'hostname:?'
>> #               returns NULL and error status 2 if $hostname is unset or NULL
>> #               (or if set and non-NULL, returns value without error status)
>> #
> 
> I would probably just point someone to a shell manual, as available
> options and behavior may change, and behavior shouldn't (but
> potentially could) vary between versions of FreeBSD.

I just checked "man 1 sh" on FreeBSD-8.1, and it did have copious documentation 
on special expansion syntaxes. (beautiful!)... so you're right, we could just 
point them at a sh(1) man-page.

I somehow had it ingrained in my mind that the sh(1) man-page was lacking and 
that the bash(1) texinfo pages were the only place to find documentation on the 
special expansion syntaxes. I'm glad to see they are fully documented in 
FreeBSD these days (even back to 4.11, which I checked just now).


>> depend sysrc 'local' '[' 'return' '.' 'have' 'eval' 'echo'
>> : function ; sysrc ( ) # $varname
>> {
>>        : ${RC_DEFAULTS:="/etc/defaults/rc.conf"}
>> 
>>        local defaults="$RC_DEFAULTS"
>>        local varname="$1"
>> 
>>        # Check arguments
>>        [ -r "$defaults" ] || return
>>        [ "$varname" ] || return
>> 
>>        ( # Execute within sub-shell to protect parent environment
>>                [ -f "$defaults" -a -r "$defaults" ] && . "$defaults"
>>                have source_rc_confs && source_rc_confs
>>                eval echo '"${'"$varname"'}"' 2> /dev/null
>>        )
>> }
>> 
>> # ... | lrev
>> # lrev $file ...
>> #
>> # Reverse lines of input. Unlike rev(1) which reverses the ordering of
>> # characters on a single line, this function instead reverses the line
>> # sequencing.
>> #
>> # For example, the following input:
>> #
>> #       Line 1
>> #       Line 2
>> #       Line 3
>> #
>> # Becomes reversed in the following manner:
>> #
>> #       Line 3
>> #       Line 2
>> #       Line 1
>> #
>> depend lrev 'local' 'if' '[' 'then' 'while' 'do' 'shift' 'done' 'else' 'read' \
>>            'fi' 'echo'
>> : function ; lrev ( ) # $file ...
>> {
>>        local stdin_rev=
>>        if [ $# -gt 0 ]; then
>>                #
>>                # Reverse lines from files passed as positional arguments.
>>                #
>>                while [ $# -gt 0 ]; do
>>                        local file="$1"
>>                        [ -f "$file" ] && lrev < "$file"
>>                        shift 1
>>                done
>>        else
>>                #
>>                # Reverse lines from standard input
>>                #
>>                while read -r LINE; do
>>                        stdin_rev="$LINE
>> $stdin_rev"
>>                done
>>        fi
>> 
>>        echo -n "$stdin_rev"
>> }
>> 
>> # sysrc_set $setting $new_value
>> #
>> # Change a setting in the system configuration files (edits the files in-place
>> # to change the value in the last assignment to the variable). If the variable
>> # does not appear in the source file, it is appended to the end of the primary
>> # system configuration file `/etc/rc.conf'.
>> #
>> depend sysrc_set 'local' 'sysrc' '[' 'return' 'for' 'do' 'done' 'if' 'have' \
>>                 'then' 'else' 'while' 'read' 'case' 'esac' 'fi' 'break' \
>>                 'eprintf' 'echo' 'lrev'
>> : function ; sysrc_set ( ) # $varname $new_value
>> {
>>        local rc_conf_files="$( sysrc rc_conf_files )"
>>        local varname="$1" new_value="$2"
> 
> IIRC I've run into issues doing something similar to this in the past,
> so I broke up the local declarations on 2+ lines.

I find that the issue only arises when you need to know the return status of a 
command substitution used in the assignment. `local' always returns success, so 
if you test the exit status after "local foo=$( cmd )", you'll never see cmd's 
status. In those cases, it's best to use local just to declare the variable and 
then assign in a separate step, whose exit status you can then test.

For example:

local foo="$( some command )"
if [ $? -ne 0 ]; then
...

will never fire because local always returns true.

Meanwhile,...

local foo
foo="$( some command )"
if [ $? -ne 0 ]; then
...

will work as expected (if "some command" returns error status, then the 
if-block will fire).


>>        local file conf_files=
>> 
>>        # Check arguments
>>        [ "$rc_conf_files" ] || return ${FAILURE-1}
>>        [ "$varname" ] || return ${FAILURE-1}
>> 
> 
> Why not just do...
> 
> if [ "x$rc_conf_files" = x -o "x$varname" = x ]
> then
>    return ${FAILURE-1}
> fi

I think you'll find (quite pleasantly) that if you intonate the lines...

        "rc_conf_files [is non-null] OR return failure"
        "varname [is non-null] OR return failure"

Sounds a lot better/cleaner than the intonation of the suggested replacement:

        "if x plus rc_conf_files expands to something that is not equal to x OR 
x plus the expansion of varname is not x then return failure"

Not to mention that if the checking of additional arguments is required, a 
single new line of similar appearance is added... whereas if you wanted to 
expand the suggested replacement to handle another argument, you'd have to add 
another "-o" case to the "[ ... ]" block which causes the line to be pushed 
further to the right, requiring something like one of the two following 
solutions:

        if [ "x$rc_conf_files" = x -o "x$varname" = x -o "x$third" = x ]
        then
        ...

or (slightly better)

        if [ "x$rc_conf_files" = x -o \
             "x$varname" = x -o \
             "x$third" = x ]
        then
        ...

But then again... you're lacking something very important in both of those 
that you do get with the original syntax ([ "$blah" ] || return ...)... clean 
diff outputs! And clean CVS differentials... and clean RCS...

Let's say that the sanity checks need to be expanded to test yet-another 
variable. In the original syntax, the diff would be one line:

+       [ "$third" ] || return ${FAILURE-1}

Otherwise, the diff is uglier (in my humble opinion):

-       if [ "x$rc_conf_files" = x -o "x$varname" = x ]
+       if [ "x$rc_conf_files" = x -o "x$varname" = x -o "x$third" = x ]

Make sense?

I think looking at CVS diffs where only a single line is added to check a new 
variable is much cleaner than a code block which must be erased and rewritten 
every time the test is expanded.



> 
> ...?
> 
>>        # Reverse the order of files in rc_conf_files
>>        for file in $rc_conf_files; do
>>                conf_files="$file${conf_files:+ }$conf_files"
>>        done
>> 
>>        #
>>        # Determine which file we are to operate on. If no files match, we'll
>>        # simply append to the last file in the list (`/etc/rc.conf').
>>        #
>>        local found=
>>        local regex="^[[:space:]]*$varname="
>>        for file in $conf_files; do
>>                #if have grep; then
>>                if false; then
>>                        grep -q "$regex" $file && found=1
> 
> Probably want to redirect stderr for the grep output to /dev/null, or
> test for the file's existence first, because rc_conf_files doesn't
> check for whether or not the file exists which would result in noise
> from your script:
> 
> $ . /etc/defaults/rc.conf
> $ echo $rc_conf_files
> /etc/rc.conf /etc/rc.conf.local
> $ grep -q foo /etc/rc.local
> grep: /etc/rc.local: No such file or directory

Good catch! I missed that ^_^



> 
>>                else
>>                        while read LINE; do \
>>                                case "$LINE" in \
>>                                $varname=*) found=1;; \
>>                                esac; \
>>                        done < $file
>>                fi
>>                [ "$found" ] && break
>>        done
>> 
>>        #
>>        # Perform sanity checks.
>>        #
>>        if [ ! -w $file ]; then
>>                eprintf "\n%s: cannot create %s: permission denied\n" \
> 
> Being pedantic, I would capitalize the P in permission to match
> EACCES's output string.

But, I actually copied the error verbatim from what the shell produces if you 
actually try the command.

So... if you remove the check (if [ ! -w $file ] ... ... ...) and try the 
script as non-root, you'll get exactly that error message (with lower-case 'p' 
on 'permission denied').

It wouldn't make sense for my script to use upper-case 'P' unless the 
bourne-shell is patched to do the same.

I'm simply producing the same error message as the shell, save for one 
difference... I try to detect the error before running into it, simply so I can 
throw a spurious newline before the error, causing the output to more 
accurately mimic what sysctl(8) produces in the same exact case (the case where 
a non-root user with insufficient privileges tries to modify an MIB). Give it a 
shot...

$ sysctl security.jail.set_hostname_allowed=1
security.jail.set_hostname_allowed: 1
sysctl: security.jail.set_hostname_allowed: Operation not permitted

If I don't test for lack of write permissions first, and throw the error out 
with a preceding new-line, the result would be:

$ sysrc foo=bar
foo: barsysrc: cannot create /etc/rc.conf: permission denied

Rather than:

$sysrc foo=bar
foo: bar
sysrc: cannot create /etc/rc.conf: permission denied


> 
>>                        "${progname:-$0}" "$file"
>>                return ${FAILURE-1}
>>        fi
>> 
>>        #
>>        # If not found, append new value to last file and return.
>>        #
>>        if [ ! "$found" ]; then
>>                echo "$varname=\"$new_value\"" >> $file
>>                return ${SUCCESS-0}
>>        fi
>> 
>>        #
>>        # Operate on the matching file, replacing only the last occurrence.
>>        #
>>        local new_contents="`lrev $file 2> /dev/null | \
>>        ( found=
>>          while read -r LINE; do
>>                if [ ! "$found" ]; then
>>                        #if have grep; then
>>                        if false; then
>>                                match="$( echo "$LINE" | grep "$regex" )"
>>                        else
>>                                case "$LINE" in
>>                                $varname=*) match=1;;
>>                                         *) match=;;
>>                                esac
>>                        fi
>>                        if [ "$match" ]; then
>>                                LINE="$varname"'="'"$new_value"'"'
>>                                found=1
>>                        fi
>>                fi
>>                echo "$LINE"
>>          done
>>        ) | lrev`"
>> 
>>        [ "$new_contents" ] \
>>                && echo "$new_contents" > $file
> 
> What if this write fails, or worse, 2+ people were modifying the file
> using different means at the same time? You could potentially
> lose/corrupt your data and your system is potentially hosed, is it
> not? Why not write the contents out to a [sort of?] temporary file
> (even $progname.$$ would suffice probably, but that would have
> potential security implications so mktemp(1) might be the way to go),
> then move the temporary file to $file? You might also want to use
> lockf to lock the file.

I'll investigate lockf; however, I think it's one of those things that you just 
live with (for example... what happens if two people issue a sysctl(8) call at 
the exact same time? Whoever gets there last sets the effective value).

You'll notice that I do all my work in memory...

If the buffer is empty, I don't write out the buffer.
Much in the way that an in-line sed (with -i, for example) will check its 
in-memory contents before writing out the changes.

Since error-checking is performed, there's no difference between doing this on 
a temporary file (essentially the memory buffer is the temporary file), save 
for weird scenarios where memory fails you -- but then you have bigger problems 
than possibly wiping out your rc.conf file, like perhaps scribbling on the disk 
in new and wonderful ways during memory corruption.

Also, since the calculations are done in memory and the read-in is decidedly 
different than the write-out (read: not performed as a single command), if two 
scripts operated simultaneously, here's what would happen:

script A reads rc.conf(5)
script B does the same
script A operates on in-memory buffer
script B does the same
script A writes out new rc.conf from modified memory buffer
script B does the same

whoever does the last write will have his contents preserved. The unlucky 
first writer will have his contents overwritten.

I do not believe the kernel will allow the two writes to intertwine even if 
firing at the exact same precise moment. I do believe that one will block until 
the other finishes (we could verify this by looking at perhaps the 
bourne-shell's '>' redirect operator to see if it flock's the file during the 
redirect, which it may, or perhaps such things are at lower levels).
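
For what it's worth, the stage-and-rename approach Garrett suggests can be
sketched like this (file names here are stand-ins for rc.conf; FreeBSD's
lockf(1) could additionally serialize concurrent writers):

```shell
#!/bin/sh
# Stage the new contents in a mktemp(1) file, then rename it over
# the target: rename(2) is atomic on the same filesystem, so
# readers see either the old or the new file in full, never a
# torn write.
conf=$(mktemp) || exit 1		# stand-in for /etc/rc.conf
echo 'foo="YES"' > "$conf"

tmp=$(mktemp "$conf.XXXXXX") || exit 1
echo 'foo="NO"' > "$tmp" && mv "$tmp" "$conf"	# atomic replace

cat "$conf"
rm -f "$conf"
```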


> 
>> }
>> 
>> ############################################################ MAIN SOURCE
>> 
>> #
>> # Perform sanity checks
>> #
>> depend main '[' 'usage'
>> [ $# -gt 0 ] || usage
>> 
>> #
>> # Process command-line options
>> #
>> depend main 'while' '[' 'do' 'case' 'usage' 'eprintf' \
>>            'break' 'esac' 'shift' 'done'
>> while [ $# -gt 0 ]; do
> 
> Why not just use the getopts shell built-in and shift $(( $OPTIND - 1
> )) at the end?

^_^

Well, I see getopt(1) is an external dependency (bad) while getopts is a 
builtin.

I'll have a looksie and see, but I find the case statement to be very readable 
as it is.
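
If it pans out, the option loop might translate to getopts roughly like this
(a sketch only; long options like --help would still need separate handling):

```shell
#!/bin/sh
# Hypothetical getopts rendering of the -h/-d/-e/-n loop; the
# builtin handles clustered flags (-de) and leaves OPTIND at the
# first non-option argument.
SYSRC_SHOW_DEPS=0 SHOW_EQUALS= SHOW_NAME=1
while getopts hden flag; do
	case "$flag" in
	h) echo "usage: ..." >&2; exit 1;;
	d) SYSRC_SHOW_DEPS=1;;
	e) SHOW_EQUALS=1;;
	n) SHOW_NAME=;;
	*) exit 1;;
	esac
done
shift $(( OPTIND - 1 ))
echo "remaining arguments: $*"
```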


> 
>>        case "$1" in
>>        -h|--help) usage;;
>>        -d) SYSRC_SHOW_DEPS=1;;
>>        -e) SHOW_EQUALS=1;;
>>        -n) SHOW_NAME=;;
>>        -*) eprintf "%s: unrecognized option \`$1'\n" "${progname:-$0}"
>>            usage;;
>>         *) # Since it is impossible (in many shells, including bourne, c,
>>            # tennex-c, and bourne-again) to name a variable beginning with a
>>            # dash/hyphen [-], we will terminate the option-list at the first
>>            # item that doesn't begin with a dash.
>>            break;;
>>        esac
>>        shift 1
>> done
>> [ "$SHOW_NAME" ] || SHOW_EQUALS=
>> 
>> #
>> # Process command-line arguments
>> #
>> depend main '[' 'while' 'do' 'case' 'echo' 'sysrc' 'if' 'sysrc_set' 'then' \
>>            'fi' 'esac' 'shift' 'done'
>> SEP=': '
>> [ "$SHOW_EQUALS" ] && SEP='="'
>> while [ $# -gt 0 ]; do
>>        NAME="${1%%=*}"
>>        case "$1" in
>>        *=*)
>>                echo -n "${SHOW_NAME:+$NAME$SEP}$(
>>                         sysrc "$1" )${SHOW_EQUALS:+\"}"
>>                if sysrc_set "$NAME" "${1#*=}"; then
>>                        echo " -> $( sysrc "$NAME" )"
>>                fi
> 
> What happens if this set fails :)? It would be confusing to end users
> if you print out the value (and they expected it to be set), but it
> failed for some particular reason.

No more confusing than sysctl(8), which does the same thing (I was in fact 
mimicking sysctl(8) in this behavior).



> 
>>                ;;
>>        *)
>>                if ! IGNORED="$( sysrc "$NAME?" )"; then
>>                        echo "${progname:-$0}: unknown variable '$NAME'"
>>                else
>>                        echo "${SHOW_NAME:+$NAME$SEP}$(
>>                              sysrc "$1" )${SHOW_EQUALS:+\"}"
> 
>    Not sure if it's a gmail screwup or not, but is there supposed to
> be a newline between `$(' and `sysrc' ?

Not a screw-up....

Since what appears between $( ... ) (back-ticks too `...`) is parsed as an 
ordinary command line, any leading whitespace before the command is ignored.

I'm using this technique to split the line because it was too long to fit 
within an 80-character-wide terminal window with tab-width set to 8 (what 
nearly everybody defaults to these days).
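
To illustrate with a trivial example (not from the script itself):

```shell
# The continuation line inside $( ... ) is parsed as part of the
# command, so the newline and leading whitespace are insignificant.
value="$(
          echo "split across lines" )"
# $value is exactly "split across lines"
```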




>    And now some more important questions:
> 
>    1. What if I do: sysrc PS1 :) (hint: variables inherited from the
> shell really shouldn't end up in the output / be queried)?

Great question... hadn't thought of that.

I could perhaps clear the inherited environment variables prior to calling 
source_rc_confs. That seems to be a prudent thing to do -- either by 
re-invoking the script under env(1) with its -i flag, or by preening the list 
of current variables and killing them off with the unset builtin in a 
for/in/do loop.
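
A quick demonstration of the env(1) approach (the _CLEAN guard variable in the 
comment is my invention, not anything in the script):

```shell
# Demonstration that env -i gives the child a clean environment:
# a variable set by the parent does not reach the child.
LEAKY=oops /usr/bin/env -i PATH="$PATH" sh -c 'echo "LEAKY=${LEAKY-unset}"'
# prints: LEAKY=unset

# In the script itself, one could re-exec once at the top, e.g.:
#   [ "$_CLEAN" ] || exec /usr/bin/env -i _CLEAN=1 PATH="$PATH" "$0" "$@"
```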


>    2. Could you add an analog for sysctl -a and sysctl -n ?

The `-n' is already covered (see usage).

I do agree `-a' is both warranted and highly useful: it gives the system 
administrator a snapshot of what /etc/rc sees at boot after performing 
source_rc_confs -- great for either troubleshooting boot problems or 
sanity-checking everything before a reboot.
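
One possible shape for a `-a' mode (a sketch under my own assumptions, not how 
sysrc actually does it): diff the shell's variable list before and after 
sourcing the configuration, inside a subshell so the caller stays unpolluted.

```shell
# Hypothetical sketch of `-a': report the variables a configuration
# file sets, by comparing `set' output before and after sourcing it.
list_rc_vars() {
	# $1: path to an rc.conf(5)-style file
	(
		set | sort > "/tmp/.rcvars.before.$$"
		. "$1"
		set | sort > "/tmp/.rcvars.after.$$"
		# lines present only after sourcing are the file's doing
		comm -13 "/tmp/.rcvars.before.$$" "/tmp/.rcvars.after.$$" |
			grep '^[A-Za-z_][A-Za-z0-9_]*='
		rm -f "/tmp/.rcvars.before.$$" "/tmp/.rcvars.after.$$"
	)
}
```

In the real tool, `$1' would be /etc/defaults/rc.conf followed by a 
source_rc_confs call rather than a single file.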


>    3. There are some more complicated scenarios that unfortunately
> this might not pass when setting variables (concerns that come to mind
> deal with user-set $rc_conf_files where values could be spread out
> amongst different rc.conf's, and where more complicated shell syntax
> would become a slippery slope for this utility, because one of the
> lesser used features within rc.conf is that it's nothing more than
> sourceable bourne shell script :)...). I would definitely test the
> following scenarios:
> 
> #/etc/rc.conf-1:
> foo=baz
> 
> #/etc/rc.conf-2:
> foo=bar
> 
> #/etc/rc.conf-3:
> foo="$foo zanzibar"
> 
> Scenario A:
> 
> #/etc/rc.conf:
> rc_conf_files="/etc/rc.conf-1 /etc/rc.conf-2"
> 
>    The value of foo should be set to bar; ideally the value of foo in
> /etc/rc.conf-2 should be set to a new value by the end user.
> 
> Scenario B:
> 
> #/etc/rc.conf:
> rc_conf_files="/etc/rc.conf-2 /etc/rc.conf-1"
> 
>    The value of foo should be set to baz; ideally the value of foo in
> /etc/rc.conf-1 should be set to a new value by the end user.
> 
> Scenario C:
> 
> #/etc/rc.conf:
> rc_conf_files="/etc/rc.conf-1 /etc/rc.conf-2 /etc/rc.conf-3"
> 
>    The value of foo should be set to `bar zanzibar'; ideally the
> value of foo in /etc/rc.conf-3 should be set to a new value by the end
> user (but that will affect the expected output potentially).
> 
> Scenario D:
> 
> #/etc/rc.conf:
> rc_conf_files="/etc/rc.conf-2 /etc/rc.conf-1 /etc/rc.conf-3"
> 
>    The value of foo should be set to `baz zanzibar'; ideally the
> value of foo in /etc/rc.conf-3 should be set to a new value by the end
> user (but that will affect the expected output potentially).
> 
>    I'll probably think up some more scenarios later that should be
> tested... the easy way out is to state that the tool does a best
> effort at overwriting the last evaluated value.
>    Overall, awesome looking tool and I'll be happy to test it
> Thanks!
> -Garrett

Well now....

If you really want to support ALL those possibilities... I _did_ have a more 
complex routine which caught them all (each and every one), but it wasn't quite 
as clean ^_^

If you really want me to break out the nuclear reactor, I'll work it back in 
from one of the predecessors of this script which was 1,000+ lines of code.

However, I found that the need to catch such esoteric conditions was far 
outweighed by the need to simplify the script and take a cleaner approach.

Yes, the rc.conf(5) scripts (whether we're talking about /etc/rc.conf, 
/etc/rc.conf.local, or ones that are appended by the end-user) can be quite 
complex beasts...

And we could see things like this...

foo=bar; bar=baz; baz=123

And the script would not be able to find the correct instance that needs 
replacing to set `bar' to some new value.

My nuclear-physics-type script could handle those instances (using sed to 
reach into the line and replace only the `bar=baz' portion, retaining the 
surrounding foo and baz declarations).

What would you prefer though? Something that is cleaner, more readable, easier 
to digest, more efficient, and has fewer dependencies, or one that is more 
robust but may require a degree to digest?
--
Cheers,
Devin Teske

-> CONTACT INFORMATION <-
Business Solutions Consultant II
FIS - fisglobal.com
510-735-5650 Mobile
510-621-2038 Office
510-621-2020 Office Fax
909-477-4578 Home/Fax
devin.te...@fisglobal.com


-> END TRANSMISSION <-

_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "freebsd-hackers-unsubscr...@freebsd.org"
