Tony Nguyen wrote:
> Dave,
>
> The incremental and revised full webrev are respectively at:
>
> http://cr.opensolaris.org/~tonyn/firewallDec112008-inc/
> http://cr.opensolaris.org/~tonyn/firewallDec112008/
>
> Some more closures to prior discussion below.
>
> The only portion of the code left to be reviewed is the ipf rule
> generation.
I haven't been over everything with a fine-tooth comb yet, but this
is the bulk of the remainder:
servinfo.c:
Since the error messages from this command could end up in a log
file, they should be internationalized. The normal output is only
ever consumed programmatically, so it isn't necessary to i18n it.
That said, you have several output formats, many of which look like
they are intended for humans. This is great for debugging, but you
don't need the variety; I'd try to keep it to one format.
servinfo.c`usage:
Open paren should be on a new line.
servinfo.c`svc_getrpcinfo:
Where does the number 12 (as in buf[12]) come from?
Instead of mallocing a struct rpcent, just put one on the stack
that you point rpc to if you can't resolve the rpc.
Though it is unlikely you'll run up against LINE_MAX, your code
could theoretically emit truncated results if strlcat were unable to
append the entire proto or port to 'line'. That could in turn
cause the firewall to malfunction in an insecure fashion. You
should track the size of line and ensure there's sufficient
space *before* attempting to append the next port or protocol.
servinfo.c`uaddr2port:
I believe the universal address you get from rpcb_getmaps is
provided by the server. Though the server should be our rpcbind, I
don't think it's safe to assume the format of the string,
especially since universal addresses aren't limited to the formats
you list. This function needs to extract port numbers in a more
generic, robust way.
ipfd.c:
max_scf_value_size should be called max_scf_name_size.
ipfd.c`ipfilter_update:
This has three callers, all of whom emit a similar error message
and exit. Perhaps that error handling could move into
ipfilter_update itself.
Additionally, they all determine failure based on the result of
ipfilter_update, which in turn only looks at whether fork and exec
succeeded. Since the fw_update subcommand of
/lib/svc/method/ipfilter can fail, shouldn't that be communicated
back to the caller?
ipfd.c`repository_event_wait:
You can get rid of the fmri length check entirely. Whatever result
you are getting from one scf function should be valid for passing
to another.
ipfilter.xml:
Some of your <description>s are too verbose. They should be only a
sentence or two, and contain no line breaks. The pre-formatted
content you currently have will be mangled by consumers and should
be removed. e.g. for the 'apply_to' and 'exceptions' properties I
would say something like:
The host and network IPs, network interfaces, and ippools to deny
if the policy is set to deny, or accept if the policy is set to
accept.
You need to have <common_name>s in addition to <description>s.
The <description> for 'custom' should read:
Apply the custom ipfilter configuration stored in a custom file
(custom file property must be set).
The <description> for 'custom_policy_file' should read:
The file containing a custom ipfilter configuration to use if a
custom policy is in force.
The <description> for 'open_ports' should read:
A set of ports to leave open regardless of firewall policy.
global.xml:
Since <common_name>s are used as labels, don't punctuate them with
periods.
When a property refers to itself, use "this property", not
"nameofthispg/nameofthisproperty". The latter requires
re-localization if the property changes, and is unnecessarily
unfriendly in an environment where the <common_name>s are used
instead of the SMF property names.
ipf_include.sh:
Instead of:
command
[ $? -ne 0 ] && return 1
(and similar constructs), you could just do:
command || return 1
Instead of elaborate counting logic to limit the number of
iterations of your while loops, just use 'for i in `seq 1 3`;
do...'.
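The core of the retry pattern then becomes something like this
(do_attempt stands in for whatever the loop body actually retries):
    for i in `seq 1 3`; do
        do_attempt && break     # do_attempt is a placeholder
        sleep 1
    done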
Since get_policy is only used with IPF_FMRI explicitly, I would have
a separate function or inline the act of getting the default
configuration. It doesn't make sense to force every call to
get_policy to test for IPF_FMRI.
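i.e., something along these lines (the property names here are only
illustrative):
    get_global_policy()
    {
        svcprop -p firewall_config_default/policy $IPF_FMRI
    }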
get_exceptions: are there any callers who pass IPF_FMRI?
I recommend creating a function that returns the property group for
an FMRI, rather than having an 'if IPF_FMRI, do one thing, else do
another' branch in each of your property-fetching functions.
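For example (sketch; the group names are illustrative):
    get_firewall_pg()
    {
        if [ "$1" = "$IPF_FMRI" ]; then
            echo firewall_config_default
        else
            echo firewall_config
        fi
    }

    pg=`get_firewall_pg $fmri`
    policy=`svcprop -p $pg/policy $fmri`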
Though the abstraction around inst_ipf_file and inst_nat_file is
nice, the fact that you almost always call both and each call needs
to recompute the FMRI path isn't. I recommend having callers use
fmri_to_file and provide suffix variables that they could use
instead. e.g. file=`fmri_to_file`; my_file=${file}${my_suffix}
service_is_online should use svcprop instead of parsing the output
of svcs.
I also don't understand why you're rejecting services that are in
transition, since you wait for the service to change and a
transition will inevitably occur if that wait isn't to be in vain.
Same comments apply to service_is_maintenance. These two should be
a single function that takes the name of the desired state as an
argument.
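A sketch of the combined function:
    service_check_state()
    {
        # Compare the instance's current state against the one requested.
        [ "`svcprop -p restarter/state $1 2>/dev/null`" = "$2" ]
    }

    service_check_state $fmri online || return 1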
value_is_interface should return 0 or 1, and that should just be
tested by the shell. It doesn't make sense to test one exit status
only to convert it to a string which you then convert back to an
exit status.
get_IP could just do something like:
echo "$1" | sed -n -e 's,^pool:\(.*\),pool/\1,p' \
-e 's,^host:\(.*\),\1,p' \
-e 's,^network:\(.*\),\1,p'
In general, code that does
foo=`echo something | sed something`
echo "$foo"
could just do
echo something | sed something
validate_interface could just do 'dladm show-dev $1 > /dev/null
2>&1' and test the result.
While the abstractions for check_ipf_syntax and check_nat_syntax
are nice, there's no need to explicitly return 0 if the return
value is 0. Just invoke the program you're wrapping and let your
caller see its exit code.
In general, boolean functions should communicate their result
through their exit code, not by echoing 'true' or 'false'.
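For check_ipf_syntax that might look like the following (assuming
the check is done with 'ipf -n -f'; substitute whatever the function
really runs):
    check_ipf_syntax()
    {
        # ipf's exit status simply becomes the function's exit status.
        /usr/sbin/ipf -n -f "$1" >/dev/null 2>&1
    }

    check_ipf_syntax $file || return 1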
tuple_get_port does the work of splitting a port range into pieces,
but then throws the result away. If it is doing so to test the
format of the input, then you shouldn't be doing it again later.
port_get_lower/port_get_higher could be simpler:
All callers call both, so you're unnecessarily performing all
your sanity testing twice and processing twice. Make a single
function that sets/returns both lport and hport.
Perform a single test for well-formedness (assuming
tuple_get_port hasn't done so already):
echo $1 | grep '^[0-9]\{1,5\}-[0-9]\{1,5\}$' >/dev/null
Then just emit them in the correct order:
echo $1 | ( IFS=- read a b ; [ $a -lt $b ] && echo $a $b || echo $b $a )
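Putting those two pieces together, a single replacement for both
functions might look like this (sketch; port_get_range and the final
assignments are just illustrative names):
    port_get_range()
    {
        # Validate once (skip this if tuple_get_port already has),
        # then print the low and high ends in ascending order.
        echo $1 | grep '^[0-9]\{1,5\}-[0-9]\{1,5\}$' >/dev/null || return 1
        echo $1 | ( IFS=- read a b ; [ $a -lt $b ] && echo $a $b || echo $b $a )
    }

    range=`port_get_range $port` || return 1
    set -- $range
    lport=$1
    hport=$2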
ipf_get_lock is racy. A simpler and more robust lock can be
built by creating a lockfile.$newpid that contains $newpid and
attempting to ln it to $IPF_LOCK.
Why does ipf_get_lock fail after 5 seconds? You're blocking on a
lock, and you know the process that holds the lock is live. Either
it will drop the lock, or it will die. Either way you eventually
get the lock.
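A sketch of the ln-based lock (this assumes an ln(1) that refuses to
overwrite an existing target, e.g. /usr/xpg4/bin/ln without -f):
    ipf_get_lock()
    {
        echo $$ > $IPF_LOCK.$$
        while :; do
            # ln fails if $IPF_LOCK already exists, so testing for and
            # taking the lock is a single atomic step.
            ln $IPF_LOCK.$$ $IPF_LOCK 2>/dev/null && break
            holder=`cat $IPF_LOCK 2>/dev/null`
            if [ -n "$holder" ] && kill -0 $holder 2>/dev/null; then
                sleep 1                 # holder is alive; keep waiting
            else
                rm -f $IPF_LOCK         # stale lock from a dead process
            fi
        done
        rm -f $IPF_LOCK.$$
    }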
custom_set_symlink/IPFILCONF:
You should put all dynamically created files and symlinks under
/var, leaving /etc alone. This is for two reasons:
1) Modifying files under /etc will frustrate users who strive to
have a read-only root filesystem. If the configuration of the
system isn't changing, the contents of /etc shouldn't be
changing either.
2) Customers with pre-existing configurations will be putting
them in /etc. In other words, your dynamically generated
files conflict with a part of the filesystem namespace people
are already using. Specifically, the #1 custom file people
will be supplying is /etc/ipf/ipf.conf. Selecting/testing a
non-custom configuration shouldn't override that
configuration.
Instead, you should generate your configuration files into
VAR_IPF_DIR, and change the ipfilter service to read its
configuration from there. Those can then symlink back to /etc if
the customer has put files there.
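Roughly, for the global configuration (paths illustrative; the same
pattern applies to the per-service files):
    conf=$VAR_IPF_DIR/ipf.conf
    rm -f $conf
    if [ -f /etc/ipf/ipf.conf ]; then
        # A customer-supplied configuration takes precedence.
        ln -s /etc/ipf/ipf.conf $conf
    else
        generate_rules > $conf      # generate_rules is a stand-in here
    fi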
replace_file:
This is always given a regular "new" file; your handling of the
new-is-link case is unnecessary (and doesn't look correct either).
If the "orig" file is a symlink, I wouldn't bother creating a
backup. Moreover, since in all cases "orig" should be a generated
file, I don't see why storing a backup is that worthwhile.
Finally, if copying "new" to "orig" fails, it means you won't be
loading the correct rules, meaning the caller should fail, too.
generate rules:
Shouldn't the "none" policy specify that packets should be passed
to ${ip}?
When the policy is "use_global", why do you generate a per-service
copy of the global rules? Why not just fall back to the global
definitions?
This function does essentially the same thing 4 times, yet has 4
copies of the code to do it. And the same idiom appears in the
global rule generation routines.
process_service:
`something > /dev/null` is a strange construct; I'm not sure what
you're trying to do, but I know that it will cause some versions of
ksh93 to crash.
create_services_rules:
When you return from the loop, you're not removing the lock file.
Dave