I seem to have backed myself into a corner and am looking for suggestions...

Our campus is largely RFC 1918 internally. The original hub-and-spoke design assigned a 10.x.x.x/16 or larger block to each significant building, so each building was its own routed address block, i.e., 10.building.subnet.host.

This allows some "interesting" access control lists using discontiguous wildcard masks. If the routers/switches are all on subnet zero, for example, you can permit access to them with something like 'permit ip any 10.0.0.0 0.255.0.255', and one statement covers every building.
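
As a concrete sketch (the ACL name and management-host address here are made up for illustration, not from our config):

  ip access-list extended MGMT-TO-INFRA
   ! wildcard 0.255.0.255: second octet = any building, third octet = subnet zero only
   permit ip host 10.0.0.100 10.0.0.0 0.255.0.255
   deny   ip any any log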

Life was good until we started down a VRF-lite path to isolate the infrastructure, common areas, and "isolated" functional areas into their own VRFs. So now we have things like:

  10.building.0.x   infrastructure (global VRF)
  10.building.16.x  general campus
  10.building.32.x  business users
  10.building.48.x  guest access
  10.building.64.x  private areas (e.g., security video)
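
In VRF-lite terms the scheme above maps to something like this on a building layer-3 switch (a minimal sketch; the VRF names, vlan numbers, and /20 masks are my assumptions, not our actual config):

  ip vrf CAMPUS
   rd 65000:16
  ip vrf BUSINESS
   rd 65000:32
  !
  interface Vlan16
   description building 5 - general campus (10.5.16.0/20)
   ip vrf forwarding CAMPUS
   ip address 10.5.16.1 255.255.240.0
  !
  interface Vlan32
   description building 5 - business users (10.5.32.0/20)
   ip vrf forwarding BUSINESS
   ip address 10.5.32.1 255.255.240.0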

For the most part, each VRF is its own domain, but there are necessary "leaks" we need to manage between VRFs, and we're trying to do that with an FWSM.

Each VRF feeds a vlan into the FWSM, and I'm trying to define the "allowed" leakage. For example, network administrators and system administrators each need access to several VRFs, and almost all of the VRFs need access to the "outside".
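
On the FWSM that comes out as one firewall interface per VRF vlan, roughly like this (vlan numbers, names, addresses, and security levels are illustrative):

  interface Vlan516
   nameif campus
   security-level 50
   ip address 10.255.16.2 255.255.255.0
  !
  interface Vlan532
   nameif business
   security-level 60
   ip address 10.255.32.2 255.255.255.0
  !
  interface Vlan999
   nameif outside
   security-level 0
   ip address 192.0.2.2 255.255.255.0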

There's no need for "real" NAT since the IP address space does not overlap, but I'm trying to use NAT control to define which VRFs can communicate with other VRFs.
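
With nat-control enabled, the FWSM refuses traffic between interfaces unless a translation rule matches, so the NAT statements effectively become the inter-VRF permission list:

  ! require a matching NAT rule for any traffic through the firewall
  nat-control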

I'd like to use identity NAT, but "only" between the allowed VRFs. Identity NAT, however, applies to ALL interfaces.
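
For reference, this is the form I mean (interface names as in the sketch above):

  ! identity NAT: campus addresses pass untranslated...
  nat (campus) 0 10.0.0.0 255.0.0.0
  ! ...but 'nat 0' carries no destination interface, so the exemption
  ! applies toward business, guest, outside -- everything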

You can use static identity NAT instead, but since NAT statements don't allow discontiguous network masks, there's a LOT of configuration needed to cover the addresses in use (every statement must be duplicated for each building).
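
Concretely, scoping just the campus-to-business leak means something like this, repeated per building (a sketch, names as above):

  ! static identity NAT is scoped to an interface pair, but the
  ! contiguous netmask forces one statement per building /20
  static (campus,business) 10.1.16.0 10.1.16.0 netmask 255.255.240.0
  static (campus,business) 10.2.16.0 10.2.16.0 netmask 255.255.240.0
  static (campus,business) 10.3.16.0 10.3.16.0 netmask 255.255.240.0
  ! ... one line per building, then the whole set again for every
  ! other permitted VRF pair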

Is there a better way to accomplish this? (other than going back and renumbering IPs into a 10.VRF.building-subnet scheme that lends itself better to the problem at hand?)
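
(Under that scheme each VRF would be one contiguous block, e.g. the campus VRF as all of 10.16.0.0/16, and a single identity static per interface pair would cover it:

  static (campus,business) 10.16.0.0 10.16.0.0 netmask 255.255.0.0

which is exactly what the current numbering can't express.)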

Jeff