[A few months later, having been distracted by a pile of projects...]

Answering Jonathan's question below: no, the current firewall_context/ipf_method for NFS doesn't work for me.  Starting and stopping the related services has no effect on the rules I see loaded into IPF.  Tracing the code, I think the failure happens inside generate_rules(), on the line that reads:

     [ "$mypolicy" = "use_global" ] && return 0

On my system, `svcprop -p firewall_config/policy svc:/network/nfs/server:default` returns "use_global", so the function returns before generating any rules.
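
If I'm reading the method code right, giving nfs/server a policy of its own should avoid that early return.  My untested guess at how (the "allow" value is an assumption on my part):

     svccfg -s svc:/network/nfs/server:default \
         setprop firewall_config/policy = astring: allow
     svcadm refresh svc:/network/nfs/server:default
     svcadm restart svc:/network/ipfilter:default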

Below is my /etc/ipf/ipf.conf.   My goal is to have the `mountd` ports dynamically added to the rules for the logical interface "nge1.vlan2"...


=====START=====
pass out all keep state
block in all
block return-rst in log first proto tcp all
block return-icmp(host-unr) in log proto udp all

# Loopback - allow everything
pass in quick on lo0 all
pass out quick on lo0 all

# nge0 - allow everything
pass in quick on nge0 all
pass out quick on nge0 all

# nge1 - allow nothing (traffic only on sub-interfaces)
#pass in quick on nge1 all
#pass out quick on nge1 all

# nge1.vlan2 - allow NFS   (how to add `mountd` ports here?)
pass in quick on nge1.vlan2 proto udp from any to any port = 111 keep state
pass in quick on nge1.vlan2 proto tcp from any to any port = 111 keep state
pass in quick on nge1.vlan2 proto udp from any to any port = 2049 keep state
pass in quick on nge1.vlan2 proto tcp from any to any port = 2049 keep state
pass in quick on nge1.vlan2 proto udp from any to any port = 4045 keep state
pass in quick on nge1.vlan2 proto tcp from any to any port = 4045 keep state
=====STOP=====
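
For comparison, the mountd rules I'd want appended would look like these (port numbers made up; they change on every restart of nfs/server):

     pass in quick on nge1.vlan2 proto udp from any to any port = 52248 keep state
     pass in quick on nge1.vlan2 proto tcp from any to any port = 50603 keep state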


Corresponding to these interfaces:

# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
nge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.0.1.24 netmask ffffff00 broadcast 10.0.1.255
        ether 0:25:90:2c:60:4a
nge1: flags=1000943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,IPv4> mtu 1500 index 3
        inet 0.0.0.0 netmask ffffffff
        ether 0:25:90:2c:60:4b
nge1.vlan2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 5
        inet 10.0.2.24 netmask ffffff00 broadcast 10.0.2.255
        ether 0:25:90:2c:60:4b
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128



Am I not supposed to be using a global policy?
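
For what it's worth, this is where I believe the Global Default policy itself lives (the property group name is my assumption, so treat it accordingly):

     svcprop -p firewall_config_default/policy svc:/network/ipfilter:default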

Thanks,
Kent



On 10/17/13 4:32 AM, Jonathan Adams wrote:
Well, looking at svccfg for nfs/server I have:

firewall_context/ipf_method    astring  "/lib/svc/method/nfs-server ipfilter"

and in /lib/svc/method/nfs-server under the "ipfilter" section I have:

        # NFS related services are RPC. nfs/server has nfsd which has
        # well-defined port number but mountd is an RPC daemon.
        #
        # Essentially, we generate rules for the following "services"
        #  - nfs/server which has nfsd and mountd
        #  - nfs/rquota
        #
        # The following services are enabled for both nfs client and
        # server so we'll treat them as client services and simply
        # allow incoming traffic.
        #  - nfs/status
        #  - nfs/nlockmgr
        #  - nfs/cbd

and:

                tports=`$SERVINFO -R -p -t -s "mountd" 2>/dev/null`
                if [ -n "$tports" ]; then
                        for tport in $tports; do
                                generate_rules $FMRI $policy "tcp" $ip \
                                    $tport $file
                        done
                fi

                uports=`$SERVINFO -R -p -u -s "mountd" 2>/dev/null`
                if [ -n "$uports" ]; then
                        for uport in $uports; do
                                generate_rules $FMRI $policy "udp" $ip \
                                    $uport $file
                        done
                fi

does this not work for you as expected?


On 17 October 2013 05:06, Kent Watsen <[email protected]> wrote:

I want to export NFS from my SAN to some machines in my DMZ, which are in a different VLAN.  To ensure that only the NFS ports are visible, I want to use the host-based firewall (IPF) to block all other ports, which is easy since I can specify the VLAN interface in the IPF rules.

Unfortunately, I use OpenBSD in the DMZ and it does not support NFSv4, so I have to use v3 instead, which entails dealing with `mountd`'s random ports.  I'd rather not open ports 32768-65535 (2^15 through 2^16-1), and I noticed that the ports reported by `rpcinfo -p localhost | grep mountd` only change when I execute `svcadm restart svc:/network/nfs/server:default`.
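
For example (the port numbers here are just illustrative):

     # rpcinfo -p localhost | grep mountd
         100005    1   udp    52248  mountd
         100005    2   udp    52248  mountd
         100005    3   udp    52248  mountd
         100005    1   tcp    50603  mountd
         100005    2   tcp    50603  mountd
         100005    3   tcp    50603  mountd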

I'm wondering how easy it might be to dynamically update the IPF rules immediately after `svcadm restart svc:/network/nfs/server:default` executes.  Is there an SMF trick that doesn't involve hacking /lib/svc/manifest/network/nfs/server.xml?

Would the best approach be to create a new SMF service definition in /etc/svc/profile/site that depends on svc:/network/nfs/server:default?  Has anybody dynamically updated IPF rules like this before?  Any gotchas to be aware of?  A rough sketch of what I have in mind for the method script is below.
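
This is untested, and the interface name and rules-file path are placeholders I made up:

     #!/bin/sh
     #
     # Sketch: refresh the IPF pass rules for mountd's current ports.

     IF=nge1.vlan2
     RULES=/var/run/mountd_ipf.rules

     # Remove the rules loaded by the previous run, if any; `ipf -rf file`
     # deletes the rules listed in the file instead of adding them, so
     # stale mountd ports don't accumulate across restarts.
     [ -f "$RULES" ] && /usr/sbin/ipf -rf "$RULES"

     # Ask rpcbind which ports mountd is registered on right now.
     # rpcinfo -p columns are: program vers proto port service.
     rpcinfo -p localhost | /usr/bin/nawk -v ifc="$IF" '
         $5 == "mountd" { seen[$3 " " $4] = 1 }
         END {
             for (k in seen) {
                 split(k, a, " ")
                 printf("pass in quick on %s proto %s from any to any port = %s keep state\n",
                     ifc, a[1], a[2])
             }
         }' > "$RULES"

     # Append the fresh rules to the active rule set.
     /usr/sbin/ipf -f "$RULES"

The new service's manifest could then depend on svc:/network/nfs/server:default with restart_on="restart" so the script re-fires on each NFS restart, though I don't know if there's a cleaner hook.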

Thanks,
Kent

