On Oct 27, 2010, at 7:41 PM, james woodyatt wrote:

> On the other hand, there is another cross-cutting concern, i.e. applications 
> that rely on their hosts using Dynamic DNS but need to be made transparent in 
> the face of NAT66. The B2B usage scenario for NAT66 implies that exterior 
> source addresses correspond to destinations in private enclaves.  Those 
> source addresses may be unique-local [RFC 4193], and in any case not publicly 
> reachable.  Therefore, they should not be published in the public DNS 
> horizon.  This means we probably need something like DNS66 alongside NAT66, 
> so that when a host advertises its interior address with Dynamic DNS, there 
> is a DNS66 process that transfers that registration into any private DNS 
> horizons as required.

Trying to make DNS smart enough to only expose addresses that are usable for a 
given peer strikes me as another slippery slope toward disaster.

Just to take one small example, it's simply not the case that a DNS resolver 
and the hosts that talk to it can be expected to be in the same realm(s).  
Especially not in a world where it's increasingly common for hosts to have 
multiple active network interfaces (real or pseudo).

(I don't even agree that ULIAs shouldn't be published in DNS, though I'll 
certainly agree that current address selection heuristics aren't sufficient to 
inform a choice between use of ULIAs vs. other kinds of addresses.)

Some useful principles:

- The network's job is to route packets transparently, not to second-guess the 
apps.  Network components that try to second-guess apps will inevitably create 
tussles.  This includes stateful NATs, ALGs, firewalls, traffic shapers, 
interception proxies, etc.
- Whenever the network can do a job well, the hosts and apps should generally 
let it do so.  That's just good separation of function.  Best-effort routing in 
a traditional single-realm IP network, with (in the vast majority of cases) one 
interface per host, was a good example of this: as long as the network honestly 
did best-effort routing, the hosts and apps were extremely unlikely to do 
better.
- If the network can't do parts of its job well, it should let the apps try to 
do a better job rather than trying to prevent them from doing a better job.  At 
least the apps know what kind of service they need.

Trying to have DNS hide "unreachable" addresses is an example of trying to keep 
apps from doing a better job.  The DNS resolver doesn't know what's accessible 
to a particular host or app.
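(Concretely: a host that simply tries each address a name resolves to will 
discover reachability for itself, with no need for DNS to guess on its behalf.  
A hypothetical sketch - the connect_fn parameter stands in for a real socket 
connect with a timeout, and the addresses are made-up examples:)

```python
from typing import Callable, Iterable, Optional

def first_reachable(addrs: Iterable[str],
                    connect_fn: Callable[[str], bool]) -> Optional[str]:
    """Try each candidate address in order and return the first one the
    app can actually reach.  The app, not the resolver, learns which
    addresses work from its own vantage point."""
    for addr in addrs:
        if connect_fn(addr):  # e.g. attempt a TCP connect with a timeout
            return addr
    return None

# Stand-in connect function: the ULA happens to be unreachable from
# this (hypothetical) vantage point, the global address is fine.
reachable = {"2001:db8::1"}
candidates = ["fd00::1", "2001:db8::1"]
print(first_reachable(candidates, lambda a: a in reachable))
```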

Keith

_______________________________________________
nat66 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nat66